00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 496 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3161 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.092 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.093 The recommended git tool is: git 00:00:00.093 using credential 00000000-0000-0000-0000-000000000002 00:00:00.095 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.122 Fetching changes from the remote Git repository 00:00:00.123 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.160 Using shallow fetch with depth 1 00:00:00.160 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.160 > git --version # timeout=10 00:00:00.184 > git --version # 'git version 2.39.2' 00:00:00.184 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.202 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.202 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.411 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.422 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.436 Checking out Revision 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 (FETCH_HEAD) 00:00:05.436 > git config core.sparsecheckout # timeout=10 00:00:05.447 > git read-tree -mu HEAD # timeout=10 00:00:05.463 > git checkout -f 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=5 00:00:05.482 Commit message: "pool: fixes for VisualBuild class" 00:00:05.482 > git rev-list --no-walk 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=10 00:00:05.584 [Pipeline] Start of Pipeline 00:00:05.600 [Pipeline] library 00:00:05.603 Loading library shm_lib@master 00:00:05.603 Library shm_lib@master is cached. Copying from home. 00:00:05.620 [Pipeline] node 00:00:05.628 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.630 [Pipeline] { 00:00:05.641 [Pipeline] catchError 00:00:05.644 [Pipeline] { 00:00:05.657 [Pipeline] wrap 00:00:05.666 [Pipeline] { 00:00:05.673 [Pipeline] stage 00:00:05.674 [Pipeline] { (Prologue) 00:00:05.855 [Pipeline] sh 00:00:06.141 + logger -p user.info -t JENKINS-CI 00:00:06.163 [Pipeline] echo 00:00:06.165 Node: CYP12 00:00:06.174 [Pipeline] sh 00:00:06.479 [Pipeline] setCustomBuildProperty 00:00:06.492 [Pipeline] echo 00:00:06.494 Cleanup processes 00:00:06.500 [Pipeline] sh 00:00:06.789 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.789 2499640 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.801 [Pipeline] sh 00:00:07.086 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.086 ++ grep -v 'sudo pgrep' 00:00:07.086 ++ awk '{print $1}' 00:00:07.086 + sudo kill -9 00:00:07.086 + true 00:00:07.100 [Pipeline] cleanWs 00:00:07.109 [WS-CLEANUP] Deleting project workspace... 00:00:07.109 [WS-CLEANUP] Deferred wipeout is used... 
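Note: the process-cleanup step traced above amounts to one pipeline. A minimal sketch of that idiom, using this job's workspace path; xargs -r is an assumed convenience here, the trace itself expands the PIDs inline and falls back to '+ true':

  # Find leftover processes started from the test workspace, drop the pgrep
  # process itself from the match, then kill whatever PIDs remain.
  # '|| true' mirrors the trailing '+ true' so an empty match cannot fail the stage.
  sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk \
    | grep -v 'sudo pgrep' \
    | awk '{print $1}' \
    | xargs -r sudo kill -9 || true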
00:00:07.117 [WS-CLEANUP] done 00:00:07.121 [Pipeline] setCustomBuildProperty 00:00:07.134 [Pipeline] sh 00:00:07.417 + sudo git config --global --replace-all safe.directory '*' 00:00:07.487 [Pipeline] nodesByLabel 00:00:07.489 Found a total of 2 nodes with the 'sorcerer' label 00:00:07.497 [Pipeline] httpRequest 00:00:07.501 HttpMethod: GET 00:00:07.502 URL: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:07.505 Sending request to url: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:07.519 Response Code: HTTP/1.1 200 OK 00:00:07.519 Success: Status code 200 is in the accepted range: 200,404 00:00:07.520 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:10.205 [Pipeline] sh 00:00:10.490 + tar --no-same-owner -xf jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:10.508 [Pipeline] httpRequest 00:00:10.513 HttpMethod: GET 00:00:10.514 URL: http://10.211.164.101/packages/spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:00:10.514 Sending request to url: http://10.211.164.101/packages/spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:00:10.525 Response Code: HTTP/1.1 200 OK 00:00:10.525 Success: Status code 200 is in the accepted range: 200,404 00:00:10.526 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:01:14.957 [Pipeline] sh 00:01:15.245 + tar --no-same-owner -xf spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:01:18.556 [Pipeline] sh 00:01:18.842 + git -C spdk log --oneline -n5 00:01:18.842 130b9406a test/nvmf: replace rpc_cmd() with direct invocation of rpc.py due to inherently larger timeout 00:01:18.842 5d3fd6726 bdev: Fix a race bug between unregistration and QoS poller 00:01:18.842 fbc673ece test/scheduler: Meassure utime of $spdk_pid threads as a fallback 00:01:18.842 3651466d0 test/scheduler: Calculate median of the cpu load samples 00:01:18.842 a7414547f test/scheduler: Make sure stderr is not O_TRUNCated in move_proc() 00:01:18.861 [Pipeline] withCredentials 00:01:18.871 > git --version # timeout=10 00:01:18.885 > git --version # 'git version 2.39.2' 00:01:18.903 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:18.905 [Pipeline] { 00:01:18.914 [Pipeline] retry 00:01:18.916 [Pipeline] { 00:01:18.933 [Pipeline] sh 00:01:19.219 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:19.492 [Pipeline] } 00:01:19.515 [Pipeline] // retry 00:01:19.520 [Pipeline] } 00:01:19.541 [Pipeline] // withCredentials 00:01:19.552 [Pipeline] httpRequest 00:01:19.557 HttpMethod: GET 00:01:19.558 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:19.558 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:19.574 Response Code: HTTP/1.1 200 OK 00:01:19.574 Success: Status code 200 is in the accepted range: 200,404 00:01:19.575 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:31.893 [Pipeline] sh 00:01:32.183 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:34.132 [Pipeline] sh 00:01:34.479 + git -C dpdk log --oneline -n5 00:01:34.479 eeb0605f11 version: 23.11.0 00:01:34.479 238778122a doc: update release notes for 23.11 00:01:34.479 46aa6b3cfc doc: fix description of RSS features 00:01:34.479 
dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:34.479 7e421ae345 devtools: support skipping forbid rule check 00:01:34.491 [Pipeline] } 00:01:34.510 [Pipeline] // stage 00:01:34.520 [Pipeline] stage 00:01:34.523 [Pipeline] { (Prepare) 00:01:34.550 [Pipeline] writeFile 00:01:34.568 [Pipeline] sh 00:01:34.851 + logger -p user.info -t JENKINS-CI 00:01:34.865 [Pipeline] sh 00:01:35.151 + logger -p user.info -t JENKINS-CI 00:01:35.165 [Pipeline] sh 00:01:35.453 + cat autorun-spdk.conf 00:01:35.453 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:35.453 SPDK_TEST_NVMF=1 00:01:35.453 SPDK_TEST_NVME_CLI=1 00:01:35.453 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:35.453 SPDK_TEST_NVMF_NICS=e810 00:01:35.453 SPDK_TEST_VFIOUSER=1 00:01:35.453 SPDK_RUN_UBSAN=1 00:01:35.453 NET_TYPE=phy 00:01:35.453 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:35.453 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:35.461 RUN_NIGHTLY=1 00:01:35.466 [Pipeline] readFile 00:01:35.495 [Pipeline] withEnv 00:01:35.497 [Pipeline] { 00:01:35.513 [Pipeline] sh 00:01:35.803 + set -ex 00:01:35.803 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:35.803 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:35.803 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:35.803 ++ SPDK_TEST_NVMF=1 00:01:35.803 ++ SPDK_TEST_NVME_CLI=1 00:01:35.803 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:35.803 ++ SPDK_TEST_NVMF_NICS=e810 00:01:35.803 ++ SPDK_TEST_VFIOUSER=1 00:01:35.803 ++ SPDK_RUN_UBSAN=1 00:01:35.803 ++ NET_TYPE=phy 00:01:35.803 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:35.803 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:35.803 ++ RUN_NIGHTLY=1 00:01:35.803 + case $SPDK_TEST_NVMF_NICS in 00:01:35.803 + DRIVERS=ice 00:01:35.803 + [[ tcp == \r\d\m\a ]] 00:01:35.803 + [[ -n ice ]] 00:01:35.803 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:35.803 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:43.936 rmmod: ERROR: Module irdma is not currently loaded 00:01:43.936 rmmod: ERROR: Module i40iw is not currently loaded 00:01:43.936 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:43.936 + true 00:01:43.936 + for D in $DRIVERS 00:01:43.936 + sudo modprobe ice 00:01:43.936 + exit 0 00:01:43.944 [Pipeline] } 00:01:43.953 [Pipeline] // withEnv 00:01:43.957 [Pipeline] } 00:01:43.971 [Pipeline] // stage 00:01:43.976 [Pipeline] catchError 00:01:43.977 [Pipeline] { 00:01:43.987 [Pipeline] timeout 00:01:43.988 Timeout set to expire in 50 min 00:01:43.989 [Pipeline] { 00:01:44.003 [Pipeline] stage 00:01:44.004 [Pipeline] { (Tests) 00:01:44.020 [Pipeline] sh 00:01:44.306 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:44.306 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:44.306 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:44.306 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:44.306 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:44.306 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:44.306 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:44.306 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:44.306 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:44.306 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:44.306 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:44.306 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:44.306 + source /etc/os-release 00:01:44.306 ++ NAME='Fedora Linux' 00:01:44.306 ++ VERSION='38 (Cloud Edition)' 00:01:44.306 ++ ID=fedora 00:01:44.306 ++ VERSION_ID=38 00:01:44.306 ++ VERSION_CODENAME= 00:01:44.306 ++ PLATFORM_ID=platform:f38 00:01:44.306 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:44.306 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:44.306 ++ LOGO=fedora-logo-icon 00:01:44.306 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:44.306 ++ HOME_URL=https://fedoraproject.org/ 00:01:44.306 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:44.306 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:44.306 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:44.306 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:44.306 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:44.306 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:44.306 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:44.306 ++ SUPPORT_END=2024-05-14 00:01:44.306 ++ VARIANT='Cloud Edition' 00:01:44.306 ++ VARIANT_ID=cloud 00:01:44.306 + uname -a 00:01:44.306 Linux spdk-cyp-12 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:44.306 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:47.605 Hugepages 00:01:47.605 node hugesize free / total 00:01:47.605 node0 1048576kB 0 / 0 00:01:47.605 node0 2048kB 0 / 0 00:01:47.605 node1 1048576kB 0 / 0 00:01:47.605 node1 2048kB 0 / 0 00:01:47.605 00:01:47.605 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:47.605 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:47.605 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:47.605 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:47.605 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:47.605 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:47.605 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:47.605 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:47.605 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:47.605 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:47.605 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:47.605 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:47.605 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:47.605 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:47.605 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:47.605 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:47.605 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:47.605 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:47.605 + rm -f /tmp/spdk-ld-path 00:01:47.605 + source autorun-spdk.conf 00:01:47.605 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:47.605 ++ SPDK_TEST_NVMF=1 00:01:47.605 ++ SPDK_TEST_NVME_CLI=1 00:01:47.605 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:47.605 ++ SPDK_TEST_NVMF_NICS=e810 00:01:47.605 ++ SPDK_TEST_VFIOUSER=1 00:01:47.605 ++ SPDK_RUN_UBSAN=1 00:01:47.605 ++ NET_TYPE=phy 00:01:47.605 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:47.605 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:47.605 ++ RUN_NIGHTLY=1 00:01:47.605 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:47.605 + [[ -n '' ]] 00:01:47.605 + sudo git config --global --add safe.directory 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:47.605 + for M in /var/spdk/build-*-manifest.txt 00:01:47.605 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:47.605 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:47.605 + for M in /var/spdk/build-*-manifest.txt 00:01:47.605 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:47.605 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:47.605 ++ uname 00:01:47.605 + [[ Linux == \L\i\n\u\x ]] 00:01:47.605 + sudo dmesg -T 00:01:47.605 + sudo dmesg --clear 00:01:47.605 + dmesg_pid=2501233 00:01:47.605 + [[ Fedora Linux == FreeBSD ]] 00:01:47.605 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:47.605 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:47.605 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:47.605 + [[ -x /usr/src/fio-static/fio ]] 00:01:47.605 + export FIO_BIN=/usr/src/fio-static/fio 00:01:47.605 + FIO_BIN=/usr/src/fio-static/fio 00:01:47.606 + sudo dmesg -Tw 00:01:47.606 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:47.606 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:47.606 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:47.606 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:47.606 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:47.606 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:47.606 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:47.606 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:47.606 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:47.606 Test configuration: 00:01:47.606 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:47.606 SPDK_TEST_NVMF=1 00:01:47.606 SPDK_TEST_NVME_CLI=1 00:01:47.606 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:47.606 SPDK_TEST_NVMF_NICS=e810 00:01:47.606 SPDK_TEST_VFIOUSER=1 00:01:47.606 SPDK_RUN_UBSAN=1 00:01:47.606 NET_TYPE=phy 00:01:47.606 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:47.606 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:47.606 RUN_NIGHTLY=1 22:58:10 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:47.606 22:58:10 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:47.606 22:58:10 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:47.606 22:58:10 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:47.606 22:58:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.606 22:58:10 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.606 22:58:10 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.606 22:58:10 -- paths/export.sh@5 -- $ export PATH 00:01:47.606 22:58:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.606 22:58:10 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:47.606 22:58:10 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:47.606 22:58:10 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1717793890.XXXXXX 00:01:47.606 22:58:10 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1717793890.wRCnD3 00:01:47.606 22:58:10 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:47.606 22:58:10 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']' 00:01:47.606 22:58:10 -- common/autobuild_common.sh@442 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:47.606 22:58:10 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:47.606 22:58:10 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:47.606 22:58:10 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:47.606 22:58:10 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:47.606 22:58:10 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:01:47.606 22:58:10 -- common/autotest_common.sh@10 -- $ set +x 00:01:47.606 22:58:10 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:47.606 22:58:10 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:47.606 22:58:10 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:47.606 22:58:10 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:47.606 22:58:10 -- spdk/autobuild.sh@16 -- $ date -u 00:01:47.606 Fri Jun 7 08:58:10 PM UTC 2024 00:01:47.606 22:58:10 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:47.606 LTS-43-g130b9406a 00:01:47.606 22:58:10 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:47.606 22:58:10 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:47.606 22:58:10 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:47.606 22:58:10 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:47.606 22:58:10 -- common/autotest_common.sh@1083 -- $ xtrace_disable 
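Aside: the config_params string assembled above is what later drives SPDK's ./configure against the externally built DPDK. A minimal sketch of the equivalent manual invocation, assuming this job's workspace layout and using only a subset of the flags shown in config_params (the make step is an assumption about what follows, not part of this trace):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  DPDK_BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build   # SPDK_RUN_EXTERNAL_DPDK
  cd "$SPDK_DIR"
  # Point SPDK at the pre-built external DPDK instead of its bundled submodule.
  ./configure --enable-debug --enable-werror --enable-ubsan --enable-coverage \
              --with-vfio-user --with-dpdk="$DPDK_BUILD"
  make -j"$(nproc)"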
00:01:47.606 22:58:10 -- common/autotest_common.sh@10 -- $ set +x 00:01:47.606 ************************************ 00:01:47.606 START TEST ubsan 00:01:47.606 ************************************ 00:01:47.606 22:58:10 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:01:47.606 using ubsan 00:01:47.606 00:01:47.606 real 0m0.000s 00:01:47.606 user 0m0.000s 00:01:47.606 sys 0m0.000s 00:01:47.606 22:58:10 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:47.606 22:58:10 -- common/autotest_common.sh@10 -- $ set +x 00:01:47.606 ************************************ 00:01:47.606 END TEST ubsan 00:01:47.606 ************************************ 00:01:47.606 22:58:10 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:47.606 22:58:10 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:47.606 22:58:10 -- common/autobuild_common.sh@427 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:47.606 22:58:10 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:01:47.606 22:58:10 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:47.606 22:58:10 -- common/autotest_common.sh@10 -- $ set +x 00:01:47.606 ************************************ 00:01:47.606 START TEST build_native_dpdk 00:01:47.606 ************************************ 00:01:47.606 22:58:10 -- common/autotest_common.sh@1104 -- $ _build_native_dpdk 00:01:47.606 22:58:10 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:47.606 22:58:10 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:47.606 22:58:10 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:47.606 22:58:10 -- common/autobuild_common.sh@51 -- $ local compiler 00:01:47.606 22:58:10 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:47.606 22:58:10 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:47.606 22:58:10 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:47.606 22:58:10 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:47.606 22:58:10 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:47.606 22:58:10 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:47.606 22:58:10 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:47.606 22:58:10 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:47.606 22:58:10 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:47.606 22:58:10 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:47.607 22:58:10 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:47.607 22:58:10 -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:47.607 22:58:10 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:47.607 22:58:10 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:47.607 22:58:10 -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:47.607 22:58:10 -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:47.607 eeb0605f11 version: 23.11.0 00:01:47.607 238778122a doc: update release notes for 23.11 00:01:47.607 46aa6b3cfc doc: fix description of RSS features 00:01:47.607 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:47.607 7e421ae345 devtools: support skipping forbid rule check 00:01:47.607 22:58:10 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:47.607 22:58:10 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:47.607 22:58:10 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:47.607 22:58:10 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:47.607 22:58:10 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:47.607 22:58:10 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:47.607 22:58:10 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:47.607 22:58:10 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:47.607 22:58:10 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:47.607 22:58:10 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:47.607 22:58:10 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:47.607 22:58:10 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:47.607 22:58:10 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:47.607 22:58:10 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:47.607 22:58:10 -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:47.607 22:58:10 -- common/autobuild_common.sh@168 -- $ uname -s 00:01:47.607 22:58:10 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:47.607 22:58:10 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:01:47.607 22:58:10 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:47.607 22:58:10 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:01:47.607 22:58:10 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:01:47.607 22:58:10 -- scripts/common.sh@335 -- $ IFS=.-: 00:01:47.607 22:58:10 -- scripts/common.sh@335 -- $ read -ra ver1 00:01:47.607 22:58:10 -- scripts/common.sh@336 -- $ IFS=.-: 00:01:47.607 22:58:10 -- scripts/common.sh@336 -- $ read -ra ver2 00:01:47.607 22:58:10 -- scripts/common.sh@337 -- $ local 'op=<' 00:01:47.607 22:58:10 -- scripts/common.sh@339 -- $ ver1_l=3 00:01:47.607 22:58:10 -- scripts/common.sh@340 -- $ ver2_l=3 00:01:47.607 22:58:10 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:01:47.607 22:58:10 -- scripts/common.sh@343 -- $ case "$op" in 00:01:47.607 22:58:10 -- scripts/common.sh@344 -- $ : 1 00:01:47.607 22:58:10 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:01:47.607 22:58:10 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:47.607 22:58:10 -- scripts/common.sh@364 -- $ decimal 23 00:01:47.607 22:58:10 -- scripts/common.sh@352 -- $ local d=23 00:01:47.607 22:58:10 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:47.607 22:58:10 -- scripts/common.sh@354 -- $ echo 23 00:01:47.607 22:58:10 -- scripts/common.sh@364 -- $ ver1[v]=23 00:01:47.607 22:58:10 -- scripts/common.sh@365 -- $ decimal 21 00:01:47.607 22:58:10 -- scripts/common.sh@352 -- $ local d=21 00:01:47.607 22:58:10 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:47.607 22:58:10 -- scripts/common.sh@354 -- $ echo 21 00:01:47.607 22:58:10 -- scripts/common.sh@365 -- $ ver2[v]=21 00:01:47.607 22:58:10 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:01:47.607 22:58:10 -- scripts/common.sh@366 -- $ return 1 00:01:47.607 22:58:10 -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:47.607 patching file config/rte_config.h 00:01:47.607 Hunk #1 succeeded at 60 (offset 1 line). 00:01:47.607 22:58:10 -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:47.607 22:58:10 -- common/autobuild_common.sh@178 -- $ uname -s 00:01:47.607 22:58:10 -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:47.607 22:58:10 -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:47.607 22:58:10 -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:52.898 The Meson build system 00:01:52.899 Version: 1.3.1 00:01:52.899 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:52.899 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:52.899 Build type: native build 00:01:52.899 Program cat found: YES (/usr/bin/cat) 00:01:52.899 Project name: DPDK 00:01:52.899 Project version: 23.11.0 00:01:52.899 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:52.899 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:52.899 Host machine cpu family: x86_64 00:01:52.899 Host machine cpu: x86_64 00:01:52.899 Message: ## Building in Developer Mode ## 00:01:52.899 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:52.899 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:52.899 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:52.899 Program python3 found: YES (/usr/bin/python3) 00:01:52.899 Program cat found: YES (/usr/bin/cat) 00:01:52.899 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
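The WARNING above is triggered by the -Dmachine=native flag in the meson call; the warning's suggested replacement is -Dcpu_instruction_set=native. A minimal sketch of the same standalone DPDK build with the non-deprecated option name, other flags copied from the meson line above (parallelism left to nproc rather than this job's -j144):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
  meson setup build-tmp \
      --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib \
      -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= \
      '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Dcpu_instruction_set=native \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
  # Build with the generated ninja files, as the job does further below.
  ninja -C build-tmp -j"$(nproc)"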
00:01:52.899 Compiler for C supports arguments -march=native: YES 00:01:52.899 Checking for size of "void *" : 8 00:01:52.899 Checking for size of "void *" : 8 (cached) 00:01:52.899 Library m found: YES 00:01:52.899 Library numa found: YES 00:01:52.899 Has header "numaif.h" : YES 00:01:52.899 Library fdt found: NO 00:01:52.899 Library execinfo found: NO 00:01:52.899 Has header "execinfo.h" : YES 00:01:52.899 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:52.899 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:52.899 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:52.899 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:52.899 Run-time dependency openssl found: YES 3.0.9 00:01:52.899 Run-time dependency libpcap found: YES 1.10.4 00:01:52.899 Has header "pcap.h" with dependency libpcap: YES 00:01:52.899 Compiler for C supports arguments -Wcast-qual: YES 00:01:52.899 Compiler for C supports arguments -Wdeprecated: YES 00:01:52.899 Compiler for C supports arguments -Wformat: YES 00:01:52.899 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:52.899 Compiler for C supports arguments -Wformat-security: NO 00:01:52.899 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:52.899 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:52.899 Compiler for C supports arguments -Wnested-externs: YES 00:01:52.899 Compiler for C supports arguments -Wold-style-definition: YES 00:01:52.899 Compiler for C supports arguments -Wpointer-arith: YES 00:01:52.899 Compiler for C supports arguments -Wsign-compare: YES 00:01:52.899 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:52.899 Compiler for C supports arguments -Wundef: YES 00:01:52.899 Compiler for C supports arguments -Wwrite-strings: YES 00:01:52.899 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:52.899 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:52.899 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:52.899 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:52.899 Program objdump found: YES (/usr/bin/objdump) 00:01:52.899 Compiler for C supports arguments -mavx512f: YES 00:01:52.899 Checking if "AVX512 checking" compiles: YES 00:01:52.899 Fetching value of define "__SSE4_2__" : 1 00:01:52.899 Fetching value of define "__AES__" : 1 00:01:52.899 Fetching value of define "__AVX__" : 1 00:01:52.899 Fetching value of define "__AVX2__" : 1 00:01:52.899 Fetching value of define "__AVX512BW__" : 1 00:01:52.899 Fetching value of define "__AVX512CD__" : 1 00:01:52.899 Fetching value of define "__AVX512DQ__" : 1 00:01:52.899 Fetching value of define "__AVX512F__" : 1 00:01:52.899 Fetching value of define "__AVX512VL__" : 1 00:01:52.899 Fetching value of define "__PCLMUL__" : 1 00:01:52.899 Fetching value of define "__RDRND__" : 1 00:01:52.899 Fetching value of define "__RDSEED__" : 1 00:01:52.899 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:52.899 Fetching value of define "__znver1__" : (undefined) 00:01:52.899 Fetching value of define "__znver2__" : (undefined) 00:01:52.899 Fetching value of define "__znver3__" : (undefined) 00:01:52.899 Fetching value of define "__znver4__" : (undefined) 00:01:52.899 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:52.899 Message: lib/log: Defining dependency "log" 00:01:52.899 Message: lib/kvargs: Defining dependency "kvargs" 00:01:52.899 Message: lib/telemetry: Defining dependency "telemetry" 
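For context, the "Fetching value of define" checks above read the host compiler's predefined macros for the native CPU. A quick, hand-run approximation of the same probe, assuming the gcc detected above (this is not how meson itself performs the check, just a way to see the same values):

  # Dump gcc's predefined macros for -march=native and pick out the SIMD/crypto
  # feature defines that the DPDK configure step is fetching.
  gcc -march=native -dM -E - </dev/null \
    | grep -E '__(SSE4_2|AES|AVX|AVX2|AVX512F|AVX512BW|AVX512DQ|AVX512VL|PCLMUL|RDRND|RDSEED|VPCLMULQDQ)__'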
00:01:52.899 Checking for function "getentropy" : NO 00:01:52.899 Message: lib/eal: Defining dependency "eal" 00:01:52.899 Message: lib/ring: Defining dependency "ring" 00:01:52.899 Message: lib/rcu: Defining dependency "rcu" 00:01:52.899 Message: lib/mempool: Defining dependency "mempool" 00:01:52.899 Message: lib/mbuf: Defining dependency "mbuf" 00:01:52.899 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:52.899 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:52.899 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:52.899 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:52.899 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:52.899 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:52.899 Compiler for C supports arguments -mpclmul: YES 00:01:52.899 Compiler for C supports arguments -maes: YES 00:01:52.899 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:52.899 Compiler for C supports arguments -mavx512bw: YES 00:01:52.899 Compiler for C supports arguments -mavx512dq: YES 00:01:52.899 Compiler for C supports arguments -mavx512vl: YES 00:01:52.899 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:52.899 Compiler for C supports arguments -mavx2: YES 00:01:52.899 Compiler for C supports arguments -mavx: YES 00:01:52.899 Message: lib/net: Defining dependency "net" 00:01:52.899 Message: lib/meter: Defining dependency "meter" 00:01:52.899 Message: lib/ethdev: Defining dependency "ethdev" 00:01:52.899 Message: lib/pci: Defining dependency "pci" 00:01:52.899 Message: lib/cmdline: Defining dependency "cmdline" 00:01:52.899 Message: lib/metrics: Defining dependency "metrics" 00:01:52.899 Message: lib/hash: Defining dependency "hash" 00:01:52.899 Message: lib/timer: Defining dependency "timer" 00:01:52.899 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:52.899 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:52.899 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:52.899 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:52.899 Message: lib/acl: Defining dependency "acl" 00:01:52.899 Message: lib/bbdev: Defining dependency "bbdev" 00:01:52.899 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:52.899 Run-time dependency libelf found: YES 0.190 00:01:52.899 Message: lib/bpf: Defining dependency "bpf" 00:01:52.899 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:52.899 Message: lib/compressdev: Defining dependency "compressdev" 00:01:52.899 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:52.899 Message: lib/distributor: Defining dependency "distributor" 00:01:52.899 Message: lib/dmadev: Defining dependency "dmadev" 00:01:52.899 Message: lib/efd: Defining dependency "efd" 00:01:52.899 Message: lib/eventdev: Defining dependency "eventdev" 00:01:52.899 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:52.899 Message: lib/gpudev: Defining dependency "gpudev" 00:01:52.899 Message: lib/gro: Defining dependency "gro" 00:01:52.899 Message: lib/gso: Defining dependency "gso" 00:01:52.899 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:52.899 Message: lib/jobstats: Defining dependency "jobstats" 00:01:52.899 Message: lib/latencystats: Defining dependency "latencystats" 00:01:52.899 Message: lib/lpm: Defining dependency "lpm" 00:01:52.899 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:52.899 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:52.899 Fetching value of define "__AVX512IFMA__" : 1 00:01:52.899 Message: 
lib/member: Defining dependency "member" 00:01:52.899 Message: lib/pcapng: Defining dependency "pcapng" 00:01:52.899 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:52.899 Message: lib/power: Defining dependency "power" 00:01:52.899 Message: lib/rawdev: Defining dependency "rawdev" 00:01:52.899 Message: lib/regexdev: Defining dependency "regexdev" 00:01:52.899 Message: lib/mldev: Defining dependency "mldev" 00:01:52.899 Message: lib/rib: Defining dependency "rib" 00:01:52.899 Message: lib/reorder: Defining dependency "reorder" 00:01:52.899 Message: lib/sched: Defining dependency "sched" 00:01:52.899 Message: lib/security: Defining dependency "security" 00:01:52.899 Message: lib/stack: Defining dependency "stack" 00:01:52.899 Has header "linux/userfaultfd.h" : YES 00:01:52.899 Has header "linux/vduse.h" : YES 00:01:52.899 Message: lib/vhost: Defining dependency "vhost" 00:01:52.899 Message: lib/ipsec: Defining dependency "ipsec" 00:01:52.899 Message: lib/pdcp: Defining dependency "pdcp" 00:01:52.899 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:52.899 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:52.899 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:52.899 Message: lib/fib: Defining dependency "fib" 00:01:52.899 Message: lib/port: Defining dependency "port" 00:01:52.899 Message: lib/pdump: Defining dependency "pdump" 00:01:52.899 Message: lib/table: Defining dependency "table" 00:01:52.899 Message: lib/pipeline: Defining dependency "pipeline" 00:01:52.899 Message: lib/graph: Defining dependency "graph" 00:01:52.899 Message: lib/node: Defining dependency "node" 00:01:52.899 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:52.899 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:52.899 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:53.842 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:53.842 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:53.842 Compiler for C supports arguments -Wno-unused-value: YES 00:01:53.842 Compiler for C supports arguments -Wno-format: YES 00:01:53.842 Compiler for C supports arguments -Wno-format-security: YES 00:01:53.842 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:53.842 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:53.842 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:53.842 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:53.842 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:53.842 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:53.842 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:53.842 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:53.842 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:53.842 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:53.842 Has header "sys/epoll.h" : YES 00:01:53.842 Program doxygen found: YES (/usr/bin/doxygen) 00:01:53.842 Configuring doxy-api-html.conf using configuration 00:01:53.842 Configuring doxy-api-man.conf using configuration 00:01:53.842 Program mandb found: YES (/usr/bin/mandb) 00:01:53.842 Program sphinx-build found: NO 00:01:53.842 Configuring rte_build_config.h using configuration 00:01:53.842 Message: 00:01:53.842 ================= 00:01:53.842 Applications Enabled 00:01:53.842 ================= 00:01:53.842 00:01:53.842 apps: 00:01:53.842 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, 
test-cmdline, test-compress-perf, 00:01:53.842 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:53.842 test-pmd, test-regex, test-sad, test-security-perf, 00:01:53.842 00:01:53.842 Message: 00:01:53.842 ================= 00:01:53.842 Libraries Enabled 00:01:53.842 ================= 00:01:53.842 00:01:53.842 libs: 00:01:53.842 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:53.842 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:53.842 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:53.842 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:53.842 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:53.842 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:53.842 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:53.842 00:01:53.842 00:01:53.842 Message: 00:01:53.842 =============== 00:01:53.842 Drivers Enabled 00:01:53.842 =============== 00:01:53.842 00:01:53.842 common: 00:01:53.842 00:01:53.842 bus: 00:01:53.842 pci, vdev, 00:01:53.842 mempool: 00:01:53.842 ring, 00:01:53.842 dma: 00:01:53.842 00:01:53.842 net: 00:01:53.842 i40e, 00:01:53.842 raw: 00:01:53.842 00:01:53.842 crypto: 00:01:53.842 00:01:53.842 compress: 00:01:53.842 00:01:53.842 regex: 00:01:53.842 00:01:53.842 ml: 00:01:53.842 00:01:53.842 vdpa: 00:01:53.842 00:01:53.842 event: 00:01:53.842 00:01:53.842 baseband: 00:01:53.842 00:01:53.842 gpu: 00:01:53.842 00:01:53.842 00:01:53.843 Message: 00:01:53.843 ================= 00:01:53.843 Content Skipped 00:01:53.843 ================= 00:01:53.843 00:01:53.843 apps: 00:01:53.843 00:01:53.843 libs: 00:01:53.843 00:01:53.843 drivers: 00:01:53.843 common/cpt: not in enabled drivers build config 00:01:53.843 common/dpaax: not in enabled drivers build config 00:01:53.843 common/iavf: not in enabled drivers build config 00:01:53.843 common/idpf: not in enabled drivers build config 00:01:53.843 common/mvep: not in enabled drivers build config 00:01:53.843 common/octeontx: not in enabled drivers build config 00:01:53.843 bus/auxiliary: not in enabled drivers build config 00:01:53.843 bus/cdx: not in enabled drivers build config 00:01:53.843 bus/dpaa: not in enabled drivers build config 00:01:53.843 bus/fslmc: not in enabled drivers build config 00:01:53.843 bus/ifpga: not in enabled drivers build config 00:01:53.843 bus/platform: not in enabled drivers build config 00:01:53.843 bus/vmbus: not in enabled drivers build config 00:01:53.843 common/cnxk: not in enabled drivers build config 00:01:53.843 common/mlx5: not in enabled drivers build config 00:01:53.843 common/nfp: not in enabled drivers build config 00:01:53.843 common/qat: not in enabled drivers build config 00:01:53.843 common/sfc_efx: not in enabled drivers build config 00:01:53.843 mempool/bucket: not in enabled drivers build config 00:01:53.843 mempool/cnxk: not in enabled drivers build config 00:01:53.843 mempool/dpaa: not in enabled drivers build config 00:01:53.843 mempool/dpaa2: not in enabled drivers build config 00:01:53.843 mempool/octeontx: not in enabled drivers build config 00:01:53.843 mempool/stack: not in enabled drivers build config 00:01:53.843 dma/cnxk: not in enabled drivers build config 00:01:53.843 dma/dpaa: not in enabled drivers build config 00:01:53.843 dma/dpaa2: not in enabled drivers build config 00:01:53.843 dma/hisilicon: not in enabled drivers build config 00:01:53.843 dma/idxd: not in enabled drivers build 
config 00:01:53.843 dma/ioat: not in enabled drivers build config 00:01:53.843 dma/skeleton: not in enabled drivers build config 00:01:53.843 net/af_packet: not in enabled drivers build config 00:01:53.843 net/af_xdp: not in enabled drivers build config 00:01:53.843 net/ark: not in enabled drivers build config 00:01:53.843 net/atlantic: not in enabled drivers build config 00:01:53.843 net/avp: not in enabled drivers build config 00:01:53.843 net/axgbe: not in enabled drivers build config 00:01:53.843 net/bnx2x: not in enabled drivers build config 00:01:53.843 net/bnxt: not in enabled drivers build config 00:01:53.843 net/bonding: not in enabled drivers build config 00:01:53.843 net/cnxk: not in enabled drivers build config 00:01:53.843 net/cpfl: not in enabled drivers build config 00:01:53.843 net/cxgbe: not in enabled drivers build config 00:01:53.843 net/dpaa: not in enabled drivers build config 00:01:53.843 net/dpaa2: not in enabled drivers build config 00:01:53.843 net/e1000: not in enabled drivers build config 00:01:53.843 net/ena: not in enabled drivers build config 00:01:53.843 net/enetc: not in enabled drivers build config 00:01:53.843 net/enetfec: not in enabled drivers build config 00:01:53.843 net/enic: not in enabled drivers build config 00:01:53.843 net/failsafe: not in enabled drivers build config 00:01:53.843 net/fm10k: not in enabled drivers build config 00:01:53.843 net/gve: not in enabled drivers build config 00:01:53.843 net/hinic: not in enabled drivers build config 00:01:53.843 net/hns3: not in enabled drivers build config 00:01:53.843 net/iavf: not in enabled drivers build config 00:01:53.843 net/ice: not in enabled drivers build config 00:01:53.843 net/idpf: not in enabled drivers build config 00:01:53.843 net/igc: not in enabled drivers build config 00:01:53.843 net/ionic: not in enabled drivers build config 00:01:53.843 net/ipn3ke: not in enabled drivers build config 00:01:53.843 net/ixgbe: not in enabled drivers build config 00:01:53.843 net/mana: not in enabled drivers build config 00:01:53.843 net/memif: not in enabled drivers build config 00:01:53.843 net/mlx4: not in enabled drivers build config 00:01:53.843 net/mlx5: not in enabled drivers build config 00:01:53.843 net/mvneta: not in enabled drivers build config 00:01:53.843 net/mvpp2: not in enabled drivers build config 00:01:53.843 net/netvsc: not in enabled drivers build config 00:01:53.843 net/nfb: not in enabled drivers build config 00:01:53.843 net/nfp: not in enabled drivers build config 00:01:53.843 net/ngbe: not in enabled drivers build config 00:01:53.843 net/null: not in enabled drivers build config 00:01:53.843 net/octeontx: not in enabled drivers build config 00:01:53.843 net/octeon_ep: not in enabled drivers build config 00:01:53.843 net/pcap: not in enabled drivers build config 00:01:53.843 net/pfe: not in enabled drivers build config 00:01:53.843 net/qede: not in enabled drivers build config 00:01:53.843 net/ring: not in enabled drivers build config 00:01:53.843 net/sfc: not in enabled drivers build config 00:01:53.843 net/softnic: not in enabled drivers build config 00:01:53.843 net/tap: not in enabled drivers build config 00:01:53.843 net/thunderx: not in enabled drivers build config 00:01:53.843 net/txgbe: not in enabled drivers build config 00:01:53.843 net/vdev_netvsc: not in enabled drivers build config 00:01:53.843 net/vhost: not in enabled drivers build config 00:01:53.843 net/virtio: not in enabled drivers build config 00:01:53.843 net/vmxnet3: not in enabled drivers build config 
00:01:53.843 raw/cnxk_bphy: not in enabled drivers build config 00:01:53.843 raw/cnxk_gpio: not in enabled drivers build config 00:01:53.843 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:53.843 raw/ifpga: not in enabled drivers build config 00:01:53.843 raw/ntb: not in enabled drivers build config 00:01:53.843 raw/skeleton: not in enabled drivers build config 00:01:53.843 crypto/armv8: not in enabled drivers build config 00:01:53.843 crypto/bcmfs: not in enabled drivers build config 00:01:53.843 crypto/caam_jr: not in enabled drivers build config 00:01:53.843 crypto/ccp: not in enabled drivers build config 00:01:53.843 crypto/cnxk: not in enabled drivers build config 00:01:53.843 crypto/dpaa_sec: not in enabled drivers build config 00:01:53.843 crypto/dpaa2_sec: not in enabled drivers build config 00:01:53.843 crypto/ipsec_mb: not in enabled drivers build config 00:01:53.843 crypto/mlx5: not in enabled drivers build config 00:01:53.843 crypto/mvsam: not in enabled drivers build config 00:01:53.843 crypto/nitrox: not in enabled drivers build config 00:01:53.843 crypto/null: not in enabled drivers build config 00:01:53.843 crypto/octeontx: not in enabled drivers build config 00:01:53.843 crypto/openssl: not in enabled drivers build config 00:01:53.843 crypto/scheduler: not in enabled drivers build config 00:01:53.843 crypto/uadk: not in enabled drivers build config 00:01:53.843 crypto/virtio: not in enabled drivers build config 00:01:53.843 compress/isal: not in enabled drivers build config 00:01:53.843 compress/mlx5: not in enabled drivers build config 00:01:53.843 compress/octeontx: not in enabled drivers build config 00:01:53.843 compress/zlib: not in enabled drivers build config 00:01:53.843 regex/mlx5: not in enabled drivers build config 00:01:53.843 regex/cn9k: not in enabled drivers build config 00:01:53.843 ml/cnxk: not in enabled drivers build config 00:01:53.843 vdpa/ifc: not in enabled drivers build config 00:01:53.843 vdpa/mlx5: not in enabled drivers build config 00:01:53.843 vdpa/nfp: not in enabled drivers build config 00:01:53.843 vdpa/sfc: not in enabled drivers build config 00:01:53.843 event/cnxk: not in enabled drivers build config 00:01:53.843 event/dlb2: not in enabled drivers build config 00:01:53.843 event/dpaa: not in enabled drivers build config 00:01:53.843 event/dpaa2: not in enabled drivers build config 00:01:53.843 event/dsw: not in enabled drivers build config 00:01:53.843 event/opdl: not in enabled drivers build config 00:01:53.843 event/skeleton: not in enabled drivers build config 00:01:53.843 event/sw: not in enabled drivers build config 00:01:53.843 event/octeontx: not in enabled drivers build config 00:01:53.843 baseband/acc: not in enabled drivers build config 00:01:53.843 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:53.843 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:53.843 baseband/la12xx: not in enabled drivers build config 00:01:53.843 baseband/null: not in enabled drivers build config 00:01:53.843 baseband/turbo_sw: not in enabled drivers build config 00:01:53.843 gpu/cuda: not in enabled drivers build config 00:01:53.843 00:01:53.843 00:01:53.843 Build targets in project: 215 00:01:53.843 00:01:53.843 DPDK 23.11.0 00:01:53.843 00:01:53.843 User defined options 00:01:53.843 libdir : lib 00:01:53.843 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:53.843 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:53.843 c_link_args : 00:01:53.843 enable_docs : false 
00:01:53.843 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:53.843 enable_kmods : false 00:01:53.843 machine : native 00:01:53.843 tests : false 00:01:53.843 00:01:53.843 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:53.843 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:53.843 22:58:16 -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144 00:01:53.843 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:54.110 [1/705] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:54.110 [2/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:54.110 [3/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:54.110 [4/705] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:54.110 [5/705] Linking static target lib/librte_kvargs.a 00:01:54.110 [6/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:54.110 [7/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:54.110 [8/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:54.110 [9/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:54.110 [10/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:54.110 [11/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:54.110 [12/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:54.110 [13/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:54.110 [14/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:54.110 [15/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:54.110 [16/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:54.110 [17/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:54.110 [18/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:54.110 [19/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:54.371 [20/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:54.371 [21/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:54.371 [22/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:54.371 [23/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:54.371 [24/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:54.371 [25/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:54.371 [26/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:54.371 [27/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:54.371 [28/705] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:54.371 [29/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:54.371 [30/705] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:54.371 [31/705] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:54.371 [32/705] Linking static target lib/librte_log.a 00:01:54.371 [33/705] Linking static target lib/librte_pci.a 00:01:54.371 [34/705] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:54.371 [35/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:54.371 [36/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:54.630 [37/705] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.630 [38/705] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.630 [39/705] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:54.630 [40/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:54.630 [41/705] Linking static target lib/librte_cfgfile.a 00:01:54.630 [42/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:54.630 [43/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:54.630 [44/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:54.630 [45/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:54.630 [46/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:54.630 [47/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:54.630 [48/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:54.630 [49/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:54.630 [50/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:54.630 [51/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:54.890 [52/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:54.890 [53/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:54.890 [54/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:54.890 [55/705] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:54.890 [56/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:54.890 [57/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:54.890 [58/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:54.890 [59/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:54.890 [60/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:54.890 [61/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:54.890 [62/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:54.890 [63/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:54.890 [64/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:54.890 [65/705] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:54.890 [66/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:54.890 [67/705] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:54.890 [68/705] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:54.890 [69/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:54.890 [70/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:54.890 [71/705] Linking static target lib/librte_meter.a 00:01:54.890 [72/705] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:54.890 [73/705] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:54.890 [74/705] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:54.890 [75/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:54.890 [76/705] Linking static target lib/librte_ring.a 00:01:54.890 [77/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:54.890 [78/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:54.890 [79/705] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:54.890 [80/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:54.890 [81/705] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:54.890 [82/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:54.890 [83/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:54.890 [84/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:54.890 [85/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:54.890 [86/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:54.890 [87/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:54.890 [88/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:54.890 [89/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:54.890 [90/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:54.890 [91/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:54.890 [92/705] Linking static target lib/librte_cmdline.a 00:01:54.890 [93/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:54.890 [94/705] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:54.890 [95/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:54.890 [96/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:54.890 [97/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:54.890 [98/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:54.890 [99/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:54.890 [100/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:54.890 [101/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:54.890 [102/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:54.890 [103/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:54.890 [104/705] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:54.890 [105/705] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:54.890 [106/705] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:54.890 [107/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:55.156 [108/705] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:55.156 [109/705] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:55.156 [110/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:55.156 [111/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:55.156 [112/705] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:55.156 [113/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:55.156 [114/705] Linking static 
target lib/librte_metrics.a 00:01:55.156 [115/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:55.156 [116/705] Linking static target lib/librte_bitratestats.a 00:01:55.156 [117/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:55.156 [118/705] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:55.156 [119/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:55.156 [120/705] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:55.156 [121/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:55.156 [122/705] Linking static target lib/librte_net.a 00:01:55.156 [123/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:55.156 [124/705] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.156 [125/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:55.156 [126/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:55.156 [127/705] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:55.156 [128/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:55.156 [129/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:55.156 [130/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:55.156 [131/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:55.156 [132/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:55.156 [133/705] Linking target lib/librte_log.so.24.0 00:01:55.156 [134/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:55.156 [135/705] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:55.156 [136/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:55.156 [137/705] Linking static target lib/librte_compressdev.a 00:01:55.156 [138/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:55.156 [139/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:55.156 [140/705] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.156 [141/705] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:55.424 [142/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:55.424 [143/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:55.424 [144/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:55.424 [145/705] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.424 [146/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:55.424 [147/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:55.424 [148/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:55.424 [149/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:55.424 [150/705] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:55.424 [151/705] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:55.424 [152/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:55.424 [153/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:55.424 [154/705] Linking static target lib/librte_timer.a 00:01:55.424 [155/705] 
Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:55.424 [156/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:55.424 [157/705] Linking static target lib/librte_dispatcher.a 00:01:55.424 [158/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:55.424 [159/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:55.424 [160/705] Linking static target lib/librte_distributor.a 00:01:55.424 [161/705] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.424 [162/705] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:55.424 [163/705] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:55.424 [164/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:55.424 [165/705] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:55.424 [166/705] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.424 [167/705] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:55.424 [168/705] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:55.424 [169/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:55.424 [170/705] Linking target lib/librte_kvargs.so.24.0 00:01:55.424 [171/705] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:55.424 [172/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:55.424 [173/705] Linking static target lib/librte_bbdev.a 00:01:55.424 [174/705] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:55.424 [175/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:55.424 [176/705] Linking static target lib/librte_jobstats.a 00:01:55.424 [177/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:55.424 [178/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:55.424 [179/705] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:55.424 [180/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:55.424 [181/705] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:55.424 [182/705] Linking static target lib/librte_gpudev.a 00:01:55.424 [183/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:55.424 [184/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:55.424 [185/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:55.424 [186/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:55.424 [187/705] Linking static target lib/librte_gro.a 00:01:55.424 [188/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:55.424 [189/705] Linking static target lib/librte_dmadev.a 00:01:55.424 [190/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:55.686 [191/705] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.686 [192/705] Linking static target lib/librte_mempool.a 00:01:55.686 [193/705] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:55.686 [194/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:55.686 [195/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:55.686 [196/705] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:55.686 [197/705] Compiling C object 
lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:55.686 [198/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:55.686 [199/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:55.686 [200/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:55.686 [201/705] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:55.686 [202/705] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:55.686 [203/705] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:55.686 [204/705] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:55.686 [205/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:55.686 [206/705] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:55.686 [207/705] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:55.686 [208/705] Linking static target lib/librte_gso.a 00:01:55.686 [209/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:55.686 [210/705] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:55.686 [211/705] Linking static target lib/librte_stack.a 00:01:55.686 [212/705] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:55.686 [213/705] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:55.686 [214/705] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:55.686 [215/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:55.686 [216/705] Linking static target lib/librte_latencystats.a 00:01:55.686 [217/705] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.686 [218/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:01:55.686 [219/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:55.686 [220/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:55.686 [221/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:55.686 [222/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:55.686 [223/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:55.686 [224/705] Linking static target lib/librte_telemetry.a 00:01:55.686 [225/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:55.686 [226/705] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:55.686 [227/705] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:55.686 [228/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:55.686 [229/705] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:55.686 [230/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:55.945 [231/705] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.945 [232/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:55.945 [233/705] Linking static target lib/librte_regexdev.a 00:01:55.945 [234/705] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:55.945 [235/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:55.945 [236/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:55.945 [237/705] Linking static target lib/librte_eal.a 00:01:55.945 
[238/705] Linking static target lib/librte_rcu.a 00:01:55.945 [239/705] Linking static target lib/librte_ip_frag.a 00:01:55.945 [240/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:55.945 [241/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:55.945 [242/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:55.945 [243/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:55.945 [244/705] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:55.945 [245/705] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:55.945 [246/705] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:55.945 [247/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:01:55.945 [248/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:55.945 [249/705] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:55.945 [250/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:55.945 [251/705] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.945 [252/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:55.945 [253/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:55.945 [254/705] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:55.945 [255/705] Linking static target lib/librte_power.a 00:01:55.945 [256/705] Linking static target lib/librte_reorder.a 00:01:55.945 [257/705] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:01:55.945 [258/705] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:55.945 [259/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:55.945 [260/705] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.945 [261/705] Linking static target lib/librte_bpf.a 00:01:55.945 [262/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:55.945 [263/705] Linking static target lib/librte_rawdev.a 00:01:55.945 [264/705] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:55.945 [265/705] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:55.945 [266/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:55.945 [267/705] Linking static target lib/librte_security.a 00:01:55.945 [268/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:55.945 [269/705] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:55.945 [270/705] Linking static target lib/librte_mldev.a 00:01:55.945 [271/705] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:55.945 [272/705] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.945 [273/705] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.945 [274/705] Linking static target lib/librte_pcapng.a 00:01:55.945 [275/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:55.945 [276/705] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.945 [277/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:55.945 [278/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:55.945 [279/705] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture 
output) 00:01:55.945 [280/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:56.205 [281/705] Linking static target lib/librte_mbuf.a 00:01:56.205 [282/705] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:56.205 [283/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:56.205 [284/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:56.205 [285/705] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:56.205 [286/705] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.205 [287/705] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:56.205 [288/705] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:56.205 [289/705] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.205 [290/705] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:56.205 [291/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:56.205 [292/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:56.205 [293/705] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:56.205 [294/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:56.205 [295/705] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:56.205 [296/705] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.205 [297/705] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:56.205 [298/705] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:56.205 [299/705] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:56.205 [300/705] Linking static target lib/librte_efd.a 00:01:56.205 [301/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:56.205 [302/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:56.205 [303/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:56.205 [304/705] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:56.205 [305/705] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:56.205 [306/705] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:56.205 [307/705] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:56.205 [308/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:56.205 [309/705] Linking static target lib/librte_lpm.a 00:01:56.205 [310/705] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:56.470 [311/705] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:56.470 [312/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:56.470 [313/705] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:56.470 [314/705] Linking static target lib/librte_rib.a 00:01:56.470 [315/705] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.470 [316/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:56.470 [317/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:56.470 [318/705] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:56.470 [319/705] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:56.470 [320/705] Generating lib/ip_frag.sym_chk with a 
custom command (wrapped by meson to capture output) 00:01:56.470 [321/705] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:56.470 [322/705] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:56.470 [323/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:56.470 [324/705] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.470 [325/705] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.470 [326/705] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.470 [327/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:56.470 [328/705] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.470 [329/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:56.470 [330/705] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:56.470 [331/705] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.470 [332/705] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:56.470 [333/705] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:56.470 [334/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:56.470 [335/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:56.470 [336/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:56.470 [337/705] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:56.470 [338/705] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:56.470 [339/705] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:56.470 [340/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:56.470 [341/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:56.470 [342/705] Linking target lib/librte_telemetry.so.24.0 00:01:56.470 [343/705] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:56.470 [344/705] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:56.470 [345/705] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:56.470 [346/705] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:56.470 [347/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:56.470 [348/705] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:56.734 [349/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:56.734 [350/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:56.734 [351/705] Linking static target lib/librte_fib.a 00:01:56.734 [352/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:56.734 [353/705] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:56.734 [354/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:56.734 [355/705] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:56.734 [356/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:56.734 [357/705] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:56.734 [358/705] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.734 [359/705] Generating lib/efd.sym_chk with a custom command (wrapped by 
meson to capture output) 00:01:56.734 [360/705] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.734 [361/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:56.734 [362/705] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.734 [363/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:56.734 [364/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:56.734 [365/705] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:56.734 [366/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:56.734 [367/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:56.734 [368/705] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:56.734 [369/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:56.734 [370/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:56.734 [371/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:56.734 [372/705] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:56.734 [373/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:56.734 [374/705] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:56.734 [375/705] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:56.734 [376/705] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.734 [377/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:56.734 [378/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:56.734 [379/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:56.734 [380/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:56.734 [381/705] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:56.734 [382/705] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:56.993 [383/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:56.993 [384/705] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:56.993 [385/705] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:56.993 [386/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:56.993 [387/705] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:56.993 [388/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:56.993 [389/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:56.993 [390/705] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:56.993 [391/705] Linking static target lib/librte_graph.a 00:01:56.993 [392/705] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.993 [393/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:56.993 [394/705] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:56.993 [395/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:56.993 [396/705] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.993 [397/705] Linking static target lib/librte_pdump.a 00:01:56.993 [398/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:56.993 [399/705] Compiling C object 
drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:56.993 [400/705] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:56.993 [401/705] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:56.993 [402/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:56.993 [403/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:56.993 [404/705] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.993 [405/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:56.993 [406/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:56.993 [407/705] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:56.993 [408/705] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.993 [409/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:57.252 [410/705] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:57.252 [411/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:57.252 [412/705] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:57.252 [413/705] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:57.252 [414/705] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:57.252 [415/705] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.252 [416/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:57.252 [417/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:57.252 [418/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:57.252 [419/705] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.252 [420/705] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:57.252 [421/705] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:57.252 [422/705] Linking static target drivers/librte_bus_vdev.a 00:01:57.252 [423/705] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:57.252 [424/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:57.252 [425/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:57.252 [426/705] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:57.252 [427/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:57.252 [428/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:57.252 [429/705] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.252 [430/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:57.252 [431/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:57.252 [432/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:57.252 [433/705] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:57.252 [434/705] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:57.252 [435/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:57.252 [436/705] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:57.252 [437/705] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:57.252 [438/705] Linking static target lib/librte_table.a 00:01:57.252 [439/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:57.252 [440/705] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:57.252 [441/705] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:57.252 [442/705] Linking static target lib/librte_cryptodev.a 00:01:57.252 [443/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:57.252 [444/705] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.252 [445/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:57.252 [446/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:57.252 [447/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:57.252 [448/705] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:57.252 [449/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:57.252 [450/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:57.252 [451/705] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:57.252 [452/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:57.252 [453/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:57.252 [454/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:57.252 [455/705] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:57.252 [456/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:57.252 [457/705] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:57.252 [458/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:57.252 [459/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:57.252 [460/705] Linking static target drivers/librte_bus_pci.a 00:01:57.512 [461/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:57.512 [462/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:57.512 [463/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:57.512 [464/705] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:57.512 [465/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:57.512 [466/705] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:57.512 [467/705] Linking static target lib/librte_sched.a 00:01:57.512 [468/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:57.512 [469/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:57.512 [470/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:57.512 [471/705] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.512 [472/705] Linking static target lib/librte_ipsec.a 00:01:57.512 [473/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 
00:01:57.512 [474/705] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:57.512 [475/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:57.512 [476/705] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:57.512 [477/705] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:57.512 [478/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:57.512 [479/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:57.512 [480/705] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:57.512 [481/705] Linking static target drivers/librte_mempool_ring.a 00:01:57.512 [482/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:57.512 [483/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:57.512 [484/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:57.512 [485/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:57.512 [486/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:57.512 [487/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:57.512 [488/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:57.512 [489/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:57.512 [490/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:57.512 [491/705] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:57.512 [492/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:57.512 [493/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:57.512 [494/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:57.512 [495/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:57.512 [496/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:57.512 [497/705] Linking static target lib/librte_pdcp.a 00:01:57.512 [498/705] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:57.512 [499/705] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:57.512 [500/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:57.512 [501/705] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:57.773 [502/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:57.773 [503/705] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:57.773 [504/705] Linking static target lib/librte_node.a 00:01:57.773 [505/705] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:57.773 [506/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:57.773 [507/705] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.773 [508/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:57.773 [509/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:57.773 [510/705] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:57.773 [511/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:57.773 
[512/705] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:57.773 [513/705] Linking static target lib/acl/libavx2_tmp.a 00:01:57.773 [514/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:57.773 [515/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:57.773 [516/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:57.773 [517/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:57.773 [518/705] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:57.773 [519/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:57.773 [520/705] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:57.773 [521/705] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:57.773 [522/705] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:57.773 [523/705] Linking static target lib/librte_port.a 00:01:57.773 [524/705] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:57.773 [525/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:57.773 [526/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:57.773 [527/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:57.773 [528/705] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:57.773 [529/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:57.773 [530/705] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:57.773 [531/705] Linking static target lib/librte_member.a 00:01:57.773 [532/705] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:57.773 [533/705] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:57.773 [534/705] Linking static target lib/librte_hash.a 00:01:57.773 [535/705] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:57.773 [536/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:57.773 [537/705] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.773 [538/705] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:58.033 [539/705] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.033 [540/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:58.033 [541/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:58.033 [542/705] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.033 [543/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:58.033 [544/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:58.033 [545/705] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:58.033 [546/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:58.033 [547/705] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.033 [548/705] Linking static target lib/librte_eventdev.a 00:01:58.033 [549/705] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:58.033 [550/705] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:58.033 [551/705] Generating 
lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.033 [552/705] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:58.033 [553/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:01:58.033 [554/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:58.033 [555/705] Linking static target lib/librte_acl.a 00:01:58.033 [556/705] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.033 [557/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:58.033 [558/705] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:58.293 [559/705] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.293 [560/705] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:58.293 [561/705] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:58.293 [562/705] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.293 [563/705] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:58.293 [564/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:58.293 [565/705] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:58.553 [566/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:58.553 [567/705] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.553 [568/705] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.553 [569/705] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:58.553 [570/705] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.813 [571/705] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:58.813 [572/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:59.073 [573/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:59.073 [574/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:59.073 [575/705] Linking static target lib/librte_ethdev.a 00:01:59.073 [576/705] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.333 [577/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:59.333 [578/705] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:59.902 [579/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:59.902 [580/705] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:59.902 [581/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:59.902 [582/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:59.902 [583/705] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:59.902 [584/705] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:59.902 [585/705] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:00.162 [586/705] Linking static target drivers/librte_net_i40e.a 00:02:01.101 [587/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:01.101 [588/705] Generating drivers/rte_net_i40e.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:01.670 [589/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:01.670 [590/705] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.865 [591/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:05.865 [592/705] Linking static target lib/librte_pipeline.a 00:02:06.808 [593/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:06.808 [594/705] Linking static target lib/librte_vhost.a 00:02:06.808 [595/705] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.808 [596/705] Linking target lib/librte_eal.so.24.0 00:02:06.808 [597/705] Linking target app/dpdk-pdump 00:02:07.069 [598/705] Linking target app/dpdk-graph 00:02:07.069 [599/705] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:07.069 [600/705] Linking target app/dpdk-test-gpudev 00:02:07.069 [601/705] Linking target app/dpdk-test-dma-perf 00:02:07.069 [602/705] Linking target app/dpdk-test-bbdev 00:02:07.069 [603/705] Linking target app/dpdk-test-eventdev 00:02:07.069 [604/705] Linking target lib/librte_pci.so.24.0 00:02:07.069 [605/705] Linking target lib/librte_ring.so.24.0 00:02:07.069 [606/705] Linking target lib/librte_meter.so.24.0 00:02:07.069 [607/705] Linking target lib/librte_stack.so.24.0 00:02:07.069 [608/705] Linking target lib/librte_timer.so.24.0 00:02:07.069 [609/705] Linking target lib/librte_cfgfile.so.24.0 00:02:07.069 [610/705] Linking target lib/librte_rawdev.so.24.0 00:02:07.069 [611/705] Linking target lib/librte_dmadev.so.24.0 00:02:07.069 [612/705] Linking target lib/librte_jobstats.so.24.0 00:02:07.069 [613/705] Linking target drivers/librte_bus_vdev.so.24.0 00:02:07.069 [614/705] Linking target app/dpdk-dumpcap 00:02:07.069 [615/705] Linking target lib/librte_acl.so.24.0 00:02:07.069 [616/705] Linking target app/dpdk-test-cmdline 00:02:07.069 [617/705] Linking target app/dpdk-test-acl 00:02:07.069 [618/705] Linking target app/dpdk-test-compress-perf 00:02:07.069 [619/705] Linking target app/dpdk-test-fib 00:02:07.069 [620/705] Linking target app/dpdk-proc-info 00:02:07.069 [621/705] Linking target app/dpdk-test-regex 00:02:07.069 [622/705] Linking target app/dpdk-test-crypto-perf 00:02:07.069 [623/705] Linking target app/dpdk-test-pipeline 00:02:07.069 [624/705] Linking target app/dpdk-test-security-perf 00:02:07.069 [625/705] Linking target app/dpdk-test-sad 00:02:07.069 [626/705] Linking target app/dpdk-test-mldev 00:02:07.069 [627/705] Linking target app/dpdk-test-flow-perf 00:02:07.069 [628/705] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:07.069 [629/705] Linking target app/dpdk-testpmd 00:02:07.069 [630/705] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:07.069 [631/705] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:07.069 [632/705] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:07.069 [633/705] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:07.069 [634/705] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:07.069 [635/705] Linking target drivers/librte_bus_pci.so.24.0 00:02:07.069 [636/705] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:07.069 [637/705] Generating lib/ethdev.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:07.069 [638/705] Linking target lib/librte_mempool.so.24.0 00:02:07.069 [639/705] Linking target lib/librte_rcu.so.24.0 00:02:07.330 [640/705] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:07.330 [641/705] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:07.330 [642/705] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:07.330 [643/705] Linking target lib/librte_mbuf.so.24.0 00:02:07.330 [644/705] Linking target lib/librte_rib.so.24.0 00:02:07.330 [645/705] Linking target drivers/librte_mempool_ring.so.24.0 00:02:07.591 [646/705] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:07.591 [647/705] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:07.591 [648/705] Linking target lib/librte_distributor.so.24.0 00:02:07.591 [649/705] Linking target lib/librte_net.so.24.0 00:02:07.591 [650/705] Linking target lib/librte_bbdev.so.24.0 00:02:07.591 [651/705] Linking target lib/librte_compressdev.so.24.0 00:02:07.591 [652/705] Linking target lib/librte_gpudev.so.24.0 00:02:07.591 [653/705] Linking target lib/librte_cryptodev.so.24.0 00:02:07.591 [654/705] Linking target lib/librte_regexdev.so.24.0 00:02:07.591 [655/705] Linking target lib/librte_mldev.so.24.0 00:02:07.591 [656/705] Linking target lib/librte_reorder.so.24.0 00:02:07.591 [657/705] Linking target lib/librte_sched.so.24.0 00:02:07.591 [658/705] Linking target lib/librte_fib.so.24.0 00:02:07.591 [659/705] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:07.591 [660/705] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:07.591 [661/705] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:07.591 [662/705] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:07.591 [663/705] Linking target lib/librte_ethdev.so.24.0 00:02:07.591 [664/705] Linking target lib/librte_cmdline.so.24.0 00:02:07.591 [665/705] Linking target lib/librte_hash.so.24.0 00:02:07.851 [666/705] Linking target lib/librte_security.so.24.0 00:02:07.852 [667/705] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:07.852 [668/705] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:07.852 [669/705] Linking target lib/librte_efd.so.24.0 00:02:07.852 [670/705] Linking target lib/librte_lpm.so.24.0 00:02:07.852 [671/705] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:07.852 [672/705] Linking target lib/librte_member.so.24.0 00:02:07.852 [673/705] Linking target lib/librte_metrics.so.24.0 00:02:07.852 [674/705] Linking target lib/librte_pcapng.so.24.0 00:02:07.852 [675/705] Linking target lib/librte_gso.so.24.0 00:02:07.852 [676/705] Linking target lib/librte_gro.so.24.0 00:02:07.852 [677/705] Linking target lib/librte_bpf.so.24.0 00:02:07.852 [678/705] Linking target lib/librte_ip_frag.so.24.0 00:02:07.852 [679/705] Linking target lib/librte_power.so.24.0 00:02:07.852 [680/705] Linking target lib/librte_eventdev.so.24.0 00:02:07.852 [681/705] Linking target lib/librte_ipsec.so.24.0 00:02:07.852 [682/705] Linking target lib/librte_pdcp.so.24.0 00:02:07.852 [683/705] Linking target drivers/librte_net_i40e.so.24.0 00:02:08.112 [684/705] Generating symbol file 
lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:08.112 [685/705] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:08.112 [686/705] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:08.112 [687/705] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:08.112 [688/705] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:08.112 [689/705] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:08.112 [690/705] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:08.112 [691/705] Linking target lib/librte_bitratestats.so.24.0 00:02:08.112 [692/705] Linking target lib/librte_latencystats.so.24.0 00:02:08.112 [693/705] Linking target lib/librte_dispatcher.so.24.0 00:02:08.112 [694/705] Linking target lib/librte_pdump.so.24.0 00:02:08.112 [695/705] Linking target lib/librte_graph.so.24.0 00:02:08.112 [696/705] Linking target lib/librte_port.so.24.0 00:02:08.112 [697/705] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:08.112 [698/705] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:08.373 [699/705] Linking target lib/librte_table.so.24.0 00:02:08.373 [700/705] Linking target lib/librte_node.so.24.0 00:02:08.373 [701/705] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:08.633 [702/705] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.634 [703/705] Linking target lib/librte_vhost.so.24.0 00:02:11.186 [704/705] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.186 [705/705] Linking target lib/librte_pipeline.so.24.0 00:02:11.186 22:58:33 -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144 install 00:02:11.186 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:11.186 [0/1] Installing files. 
00:02:11.186 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:11.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:02:11.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:11.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:11.186 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.187 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:11.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 
00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:11.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:11.189 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.189 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.190 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:11.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.191 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:11.191 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:11.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:11.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:11.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:11.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:11.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:11.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:11.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:11.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:11.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:11.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:11.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:11.192 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:11.192 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:11.192 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_hash.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_gso.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.192 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.506 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.506 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.506 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.506 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.506 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.506 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.506 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.506 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.506 Installing lib/librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.506 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.506 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.506 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.506 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.506 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.506 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.506 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.506 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.506 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.506 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.506 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.506 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.506 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.506 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.506 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.506 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.506 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.506 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.506 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.506 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:11.506 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.506 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:11.506 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.506 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:11.506 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.506 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:11.506 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.506 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.506 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.506 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.506 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.506 Installing 
app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.506 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.506 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.506 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.506 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.506 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.506 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.506 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.506 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.506 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.506 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.506 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.506 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.506 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.506 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:11.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:11.510 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:11.510 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:11.510 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:11.510 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:11.511 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:11.511 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:11.511 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:11.511 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:11.511 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:11.511 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:11.511 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:11.511 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:11.511 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:11.511 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:11.511 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:11.511 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:11.511 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:11.511 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:11.511 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:11.511 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:11.511 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:11.511 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:11.511 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:11.511 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:11.511 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:11.511 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:11.511 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:11.511 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:11.511 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:11.511 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:11.511 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:11.511 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:11.511 
Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:11.511 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:11.511 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:11.511 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:11.511 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:11.511 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:11.511 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:11.511 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:11.511 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:11.511 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:11.511 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:11.511 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:11.511 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:11.511 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:11.511 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:11.511 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:11.511 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:11.511 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:11.511 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:11.511 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:11.511 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:11.511 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:11.511 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:11.511 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:11.511 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:11.511 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:11.511 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:11.511 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:11.511 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:11.511 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:11.511 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:11.511 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:11.511 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:11.511 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:11.511 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:11.511 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:11.511 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:11.511 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:11.511 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:11.511 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:11.511 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:11.511 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:11.511 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:11.511 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:11.511 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:11.511 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:11.511 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:11.511 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:11.511 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:11.511 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:11.511 './librte_mempool_ring.so' -> 
'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:11.511 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:11.511 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:11.511 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:11.511 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:11.511 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:11.511 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:11.511 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:11.511 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:11.511 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:11.511 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:11.511 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:11.511 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:11.512 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:11.512 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:11.512 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:11.512 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:11.512 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:11.512 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:11.512 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:11.512 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:11.512 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:11.512 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:11.512 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:11.512 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:11.512 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:11.512 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:11.512 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:11.512 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:11.512 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:11.512 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:11.512 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:11.512 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:11.512 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:11.512 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:11.512 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:11.512 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:11.512 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:11.512 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:11.512 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:11.512 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:11.512 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:11.512 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:11.512 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:11.512 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:11.512 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:11.512 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:11.512 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:11.512 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:11.512 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:11.512 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:11.512 22:58:33 -- common/autobuild_common.sh@189 -- $ uname -s 00:02:11.512 22:58:33 -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:11.512 22:58:33 -- common/autobuild_common.sh@200 -- $ cat 00:02:11.512 22:58:33 -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:11.512 00:02:11.512 real 0m23.823s 00:02:11.512 user 7m7.148s 00:02:11.512 sys 3m15.555s 00:02:11.512 22:58:33 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:11.512 22:58:33 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.512 ************************************ 00:02:11.512 END TEST build_native_dpdk 00:02:11.512 ************************************ 00:02:11.512 22:58:34 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:11.512 22:58:34 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:11.512 22:58:34 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:11.512 22:58:34 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:11.512 22:58:34 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:11.512 22:58:34 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:11.512 22:58:34 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:11.512 22:58:34 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:11.512 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:11.798 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.798 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.798 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:12.058 Using 'verbs' RDMA provider 00:02:27.538 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:02:39.767 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:39.767 Creating mk/config.mk...done. 00:02:39.767 Creating mk/cc.flags.mk...done. 00:02:39.767 Type 'make' to build. 00:02:39.767 22:59:01 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:02:39.767 22:59:01 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:39.767 22:59:01 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:39.767 22:59:01 -- common/autotest_common.sh@10 -- $ set +x 00:02:39.767 ************************************ 00:02:39.767 START TEST make 00:02:39.767 ************************************ 00:02:39.767 22:59:01 -- common/autotest_common.sh@1104 -- $ make -j144 00:02:39.767 make[1]: Nothing to be done for 'all'. 
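The configure invocation recorded above builds SPDK against the DPDK tree that the preceding install step placed under dpdk/build, with --with-dpdk pointing at that prefix so pkg-config can resolve libdpdk from build/lib/pkgconfig. A minimal sketch of the same flow outside this Jenkins job, assuming hypothetical checkout paths $DPDK_DIR and $SPDK_DIR in place of the workspace directories above and using only a subset of the flags shown:

# Sketch only; $DPDK_DIR and $SPDK_DIR are placeholder paths, not values taken from this log.
# 1) Build DPDK and install it into a local prefix so SPDK can consume it via pkg-config.
cd "$DPDK_DIR"
meson setup build-tmp --prefix="$DPDK_DIR/build"
ninja -C build-tmp install

# 2) Configure SPDK against that DPDK prefix (flags mirror part of the invocation logged above).
cd "$SPDK_DIR"
./configure --enable-debug --enable-werror --with-shared --with-dpdk="$DPDK_DIR/build"

# 3) pkg-config should now resolve the DPDK libraries from the prefix installed in step 1.
PKG_CONFIG_PATH="$DPDK_DIR/build/lib/pkgconfig" pkg-config --libs libdpdk

# 4) Build SPDK.
make -j"$(nproc)"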
00:02:40.338 The Meson build system 00:02:40.338 Version: 1.3.1 00:02:40.338 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:40.338 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:40.338 Build type: native build 00:02:40.338 Project name: libvfio-user 00:02:40.338 Project version: 0.0.1 00:02:40.338 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:40.338 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:40.338 Host machine cpu family: x86_64 00:02:40.338 Host machine cpu: x86_64 00:02:40.338 Run-time dependency threads found: YES 00:02:40.338 Library dl found: YES 00:02:40.338 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:40.338 Run-time dependency json-c found: YES 0.17 00:02:40.338 Run-time dependency cmocka found: YES 1.1.7 00:02:40.338 Program pytest-3 found: NO 00:02:40.338 Program flake8 found: NO 00:02:40.338 Program misspell-fixer found: NO 00:02:40.338 Program restructuredtext-lint found: NO 00:02:40.338 Program valgrind found: YES (/usr/bin/valgrind) 00:02:40.338 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:40.338 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:40.338 Compiler for C supports arguments -Wwrite-strings: YES 00:02:40.338 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:40.338 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:40.338 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:40.338 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:40.338 Build targets in project: 8 00:02:40.338 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:40.338 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:40.338 00:02:40.338 libvfio-user 0.0.1 00:02:40.338 00:02:40.338 User defined options 00:02:40.338 buildtype : debug 00:02:40.338 default_library: shared 00:02:40.338 libdir : /usr/local/lib 00:02:40.338 00:02:40.338 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:40.908 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:40.908 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:40.908 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:40.908 [3/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:40.908 [4/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:40.908 [5/37] Compiling C object samples/null.p/null.c.o 00:02:40.908 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:40.908 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:40.908 [8/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:40.908 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:40.908 [10/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:40.908 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:40.908 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:40.908 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:40.908 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:40.908 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:40.908 [16/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:40.908 [17/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:40.908 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:40.908 [19/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:40.908 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:40.908 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:40.908 [22/37] Compiling C object samples/server.p/server.c.o 00:02:40.908 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:40.908 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:40.908 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:40.908 [26/37] Compiling C object samples/client.p/client.c.o 00:02:40.908 [27/37] Linking target samples/client 00:02:40.908 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:40.908 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:41.167 [30/37] Linking target test/unit_tests 00:02:41.167 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:02:41.167 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:41.167 [33/37] Linking target samples/server 00:02:41.167 [34/37] Linking target samples/lspci 00:02:41.167 [35/37] Linking target samples/null 00:02:41.167 [36/37] Linking target samples/gpio-pci-idio-16 00:02:41.167 [37/37] Linking target samples/shadow_ioeventfd_server 00:02:41.167 INFO: autodetecting backend as ninja 00:02:41.167 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
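The meson summary above shows libvfio-user being configured as a debug build with shared libraries and libdir /usr/local/lib, and the install step that follows stages the result under the SPDK build tree with DESTDIR instead of writing to the real /usr/local. A rough sketch of that configure/compile/stage sequence, assuming hypothetical paths $VFU_SRC (libvfio-user source) and $VFU_STAGE (staging directory) rather than the workspace paths in this log:

# Sketch only; $VFU_SRC and $VFU_STAGE are placeholders, not values taken from this log.
meson setup "$VFU_SRC/build-debug" "$VFU_SRC" --buildtype=debug --default-library=shared --libdir=/usr/local/lib
ninja -C "$VFU_SRC/build-debug"
# Redirect the install into a private staging prefix, as the DESTDIR invocation below does.
DESTDIR="$VFU_STAGE" meson install --quiet -C "$VFU_SRC/build-debug"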
00:02:41.429 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:41.690 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:41.690 ninja: no work to do. 00:02:49.829 CC lib/ut/ut.o 00:02:49.829 CC lib/log/log.o 00:02:49.829 CC lib/ut_mock/mock.o 00:02:49.829 CC lib/log/log_deprecated.o 00:02:49.829 CC lib/log/log_flags.o 00:02:49.829 LIB libspdk_ut.a 00:02:49.829 LIB libspdk_ut_mock.a 00:02:49.829 SO libspdk_ut.so.1.0 00:02:49.829 SO libspdk_ut_mock.so.5.0 00:02:49.829 LIB libspdk_log.a 00:02:49.829 SYMLINK libspdk_ut.so 00:02:49.829 SYMLINK libspdk_ut_mock.so 00:02:49.829 SO libspdk_log.so.6.1 00:02:49.829 SYMLINK libspdk_log.so 00:02:49.829 CC lib/util/base64.o 00:02:49.829 CC lib/dma/dma.o 00:02:49.829 CC lib/util/bit_array.o 00:02:49.829 CXX lib/trace_parser/trace.o 00:02:49.829 CC lib/util/cpuset.o 00:02:49.829 CC lib/util/crc16.o 00:02:49.829 CC lib/util/crc32.o 00:02:49.829 CC lib/ioat/ioat.o 00:02:49.829 CC lib/util/crc32c.o 00:02:49.829 CC lib/util/crc32_ieee.o 00:02:49.829 CC lib/util/crc64.o 00:02:49.829 CC lib/util/dif.o 00:02:49.829 CC lib/util/fd.o 00:02:49.829 CC lib/util/file.o 00:02:49.829 CC lib/util/hexlify.o 00:02:49.829 CC lib/util/iov.o 00:02:49.829 CC lib/util/math.o 00:02:49.829 CC lib/util/pipe.o 00:02:49.829 CC lib/util/strerror_tls.o 00:02:49.829 CC lib/util/string.o 00:02:49.829 CC lib/util/uuid.o 00:02:49.829 CC lib/util/fd_group.o 00:02:49.829 CC lib/util/xor.o 00:02:49.829 CC lib/util/zipf.o 00:02:49.829 CC lib/vfio_user/host/vfio_user_pci.o 00:02:49.829 CC lib/vfio_user/host/vfio_user.o 00:02:49.829 LIB libspdk_dma.a 00:02:49.829 SO libspdk_dma.so.3.0 00:02:49.829 LIB libspdk_ioat.a 00:02:49.829 SYMLINK libspdk_dma.so 00:02:49.829 SO libspdk_ioat.so.6.0 00:02:49.829 LIB libspdk_vfio_user.a 00:02:49.829 SYMLINK libspdk_ioat.so 00:02:49.829 SO libspdk_vfio_user.so.4.0 00:02:49.829 SYMLINK libspdk_vfio_user.so 00:02:49.829 LIB libspdk_util.a 00:02:49.829 SO libspdk_util.so.8.0 00:02:49.829 SYMLINK libspdk_util.so 00:02:49.829 LIB libspdk_trace_parser.a 00:02:49.829 SO libspdk_trace_parser.so.4.0 00:02:50.090 SYMLINK libspdk_trace_parser.so 00:02:50.090 CC lib/vmd/led.o 00:02:50.090 CC lib/vmd/vmd.o 00:02:50.090 CC lib/env_dpdk/env.o 00:02:50.090 CC lib/conf/conf.o 00:02:50.090 CC lib/env_dpdk/memory.o 00:02:50.090 CC lib/env_dpdk/pci.o 00:02:50.090 CC lib/env_dpdk/init.o 00:02:50.090 CC lib/env_dpdk/threads.o 00:02:50.090 CC lib/env_dpdk/pci_ioat.o 00:02:50.090 CC lib/idxd/idxd.o 00:02:50.090 CC lib/json/json_parse.o 00:02:50.090 CC lib/env_dpdk/pci_virtio.o 00:02:50.090 CC lib/idxd/idxd_user.o 00:02:50.090 CC lib/env_dpdk/pci_vmd.o 00:02:50.090 CC lib/json/json_util.o 00:02:50.090 CC lib/env_dpdk/pci_idxd.o 00:02:50.090 CC lib/idxd/idxd_kernel.o 00:02:50.090 CC lib/rdma/common.o 00:02:50.090 CC lib/json/json_write.o 00:02:50.090 CC lib/env_dpdk/pci_event.o 00:02:50.090 CC lib/rdma/rdma_verbs.o 00:02:50.090 CC lib/env_dpdk/pci_dpdk.o 00:02:50.090 CC lib/env_dpdk/sigbus_handler.o 00:02:50.090 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:50.090 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:50.351 LIB libspdk_conf.a 00:02:50.351 SO libspdk_conf.so.5.0 00:02:50.351 LIB libspdk_rdma.a 00:02:50.351 LIB libspdk_json.a 00:02:50.351 SO libspdk_rdma.so.5.0 00:02:50.351 SYMLINK libspdk_conf.so 00:02:50.351 SO libspdk_json.so.5.1 00:02:50.351 SYMLINK libspdk_rdma.so 00:02:50.351 SYMLINK 
libspdk_json.so 00:02:50.611 LIB libspdk_idxd.a 00:02:50.611 SO libspdk_idxd.so.11.0 00:02:50.611 LIB libspdk_vmd.a 00:02:50.611 SYMLINK libspdk_idxd.so 00:02:50.611 SO libspdk_vmd.so.5.0 00:02:50.611 CC lib/jsonrpc/jsonrpc_server.o 00:02:50.611 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:50.611 CC lib/jsonrpc/jsonrpc_client.o 00:02:50.611 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:50.611 SYMLINK libspdk_vmd.so 00:02:50.871 LIB libspdk_jsonrpc.a 00:02:51.131 SO libspdk_jsonrpc.so.5.1 00:02:51.131 SYMLINK libspdk_jsonrpc.so 00:02:51.131 LIB libspdk_env_dpdk.a 00:02:51.391 SO libspdk_env_dpdk.so.13.0 00:02:51.391 CC lib/rpc/rpc.o 00:02:51.391 SYMLINK libspdk_env_dpdk.so 00:02:51.392 LIB libspdk_rpc.a 00:02:51.392 SO libspdk_rpc.so.5.0 00:02:51.653 SYMLINK libspdk_rpc.so 00:02:51.653 CC lib/sock/sock.o 00:02:51.653 CC lib/sock/sock_rpc.o 00:02:51.913 CC lib/trace/trace.o 00:02:51.913 CC lib/trace/trace_flags.o 00:02:51.913 CC lib/trace/trace_rpc.o 00:02:51.913 CC lib/notify/notify.o 00:02:51.913 CC lib/notify/notify_rpc.o 00:02:51.913 LIB libspdk_notify.a 00:02:51.913 SO libspdk_notify.so.5.0 00:02:51.913 LIB libspdk_trace.a 00:02:52.174 SO libspdk_trace.so.9.0 00:02:52.174 SYMLINK libspdk_notify.so 00:02:52.174 LIB libspdk_sock.a 00:02:52.174 SO libspdk_sock.so.8.0 00:02:52.174 SYMLINK libspdk_trace.so 00:02:52.174 SYMLINK libspdk_sock.so 00:02:52.435 CC lib/thread/iobuf.o 00:02:52.435 CC lib/thread/thread.o 00:02:52.435 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:52.435 CC lib/nvme/nvme_ctrlr.o 00:02:52.435 CC lib/nvme/nvme_fabric.o 00:02:52.435 CC lib/nvme/nvme_ns_cmd.o 00:02:52.435 CC lib/nvme/nvme_ns.o 00:02:52.435 CC lib/nvme/nvme_pcie_common.o 00:02:52.435 CC lib/nvme/nvme_pcie.o 00:02:52.435 CC lib/nvme/nvme_qpair.o 00:02:52.435 CC lib/nvme/nvme.o 00:02:52.435 CC lib/nvme/nvme_quirks.o 00:02:52.435 CC lib/nvme/nvme_transport.o 00:02:52.435 CC lib/nvme/nvme_discovery.o 00:02:52.435 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:52.435 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:52.435 CC lib/nvme/nvme_tcp.o 00:02:52.435 CC lib/nvme/nvme_opal.o 00:02:52.435 CC lib/nvme/nvme_io_msg.o 00:02:52.435 CC lib/nvme/nvme_poll_group.o 00:02:52.435 CC lib/nvme/nvme_zns.o 00:02:52.435 CC lib/nvme/nvme_cuse.o 00:02:52.435 CC lib/nvme/nvme_vfio_user.o 00:02:52.435 CC lib/nvme/nvme_rdma.o 00:02:53.821 LIB libspdk_thread.a 00:02:53.821 SO libspdk_thread.so.9.0 00:02:53.821 SYMLINK libspdk_thread.so 00:02:53.821 CC lib/blob/blobstore.o 00:02:53.821 CC lib/blob/request.o 00:02:53.821 CC lib/blob/zeroes.o 00:02:53.821 CC lib/blob/blob_bs_dev.o 00:02:54.082 CC lib/virtio/virtio.o 00:02:54.082 CC lib/accel/accel.o 00:02:54.082 CC lib/virtio/virtio_vhost_user.o 00:02:54.082 CC lib/accel/accel_rpc.o 00:02:54.082 CC lib/virtio/virtio_vfio_user.o 00:02:54.082 CC lib/init/json_config.o 00:02:54.082 CC lib/virtio/virtio_pci.o 00:02:54.082 CC lib/accel/accel_sw.o 00:02:54.082 CC lib/init/subsystem.o 00:02:54.082 CC lib/init/subsystem_rpc.o 00:02:54.082 CC lib/init/rpc.o 00:02:54.082 CC lib/vfu_tgt/tgt_endpoint.o 00:02:54.082 CC lib/vfu_tgt/tgt_rpc.o 00:02:54.082 LIB libspdk_nvme.a 00:02:54.342 LIB libspdk_init.a 00:02:54.342 SO libspdk_init.so.4.0 00:02:54.342 LIB libspdk_virtio.a 00:02:54.342 LIB libspdk_vfu_tgt.a 00:02:54.342 SO libspdk_virtio.so.6.0 00:02:54.342 SO libspdk_nvme.so.12.0 00:02:54.342 SYMLINK libspdk_init.so 00:02:54.342 SO libspdk_vfu_tgt.so.2.0 00:02:54.342 SYMLINK libspdk_virtio.so 00:02:54.342 SYMLINK libspdk_vfu_tgt.so 00:02:54.602 CC lib/event/app.o 00:02:54.603 CC lib/event/reactor.o 00:02:54.603 CC 
lib/event/log_rpc.o 00:02:54.603 CC lib/event/app_rpc.o 00:02:54.603 CC lib/event/scheduler_static.o 00:02:54.603 SYMLINK libspdk_nvme.so 00:02:54.864 LIB libspdk_accel.a 00:02:54.864 SO libspdk_accel.so.14.0 00:02:54.864 LIB libspdk_event.a 00:02:54.864 SYMLINK libspdk_accel.so 00:02:55.124 SO libspdk_event.so.12.0 00:02:55.124 SYMLINK libspdk_event.so 00:02:55.124 CC lib/bdev/bdev.o 00:02:55.124 CC lib/bdev/bdev_rpc.o 00:02:55.124 CC lib/bdev/bdev_zone.o 00:02:55.124 CC lib/bdev/part.o 00:02:55.124 CC lib/bdev/scsi_nvme.o 00:02:56.066 LIB libspdk_blob.a 00:02:56.326 SO libspdk_blob.so.10.1 00:02:56.326 SYMLINK libspdk_blob.so 00:02:56.587 CC lib/lvol/lvol.o 00:02:56.587 CC lib/blobfs/blobfs.o 00:02:56.587 CC lib/blobfs/tree.o 00:02:57.159 LIB libspdk_blobfs.a 00:02:57.159 LIB libspdk_lvol.a 00:02:57.419 SO libspdk_blobfs.so.9.0 00:02:57.419 SO libspdk_lvol.so.9.1 00:02:57.419 LIB libspdk_bdev.a 00:02:57.419 SYMLINK libspdk_blobfs.so 00:02:57.419 SYMLINK libspdk_lvol.so 00:02:57.419 SO libspdk_bdev.so.14.0 00:02:57.419 SYMLINK libspdk_bdev.so 00:02:57.679 CC lib/ublk/ublk.o 00:02:57.679 CC lib/ublk/ublk_rpc.o 00:02:57.679 CC lib/scsi/dev.o 00:02:57.679 CC lib/scsi/lun.o 00:02:57.679 CC lib/scsi/port.o 00:02:57.679 CC lib/scsi/scsi.o 00:02:57.679 CC lib/nvmf/ctrlr.o 00:02:57.679 CC lib/scsi/scsi_bdev.o 00:02:57.679 CC lib/nvmf/ctrlr_discovery.o 00:02:57.679 CC lib/scsi/scsi_pr.o 00:02:57.679 CC lib/nbd/nbd.o 00:02:57.679 CC lib/scsi/task.o 00:02:57.679 CC lib/nvmf/ctrlr_bdev.o 00:02:57.679 CC lib/scsi/scsi_rpc.o 00:02:57.679 CC lib/ftl/ftl_core.o 00:02:57.679 CC lib/nbd/nbd_rpc.o 00:02:57.679 CC lib/nvmf/subsystem.o 00:02:57.679 CC lib/ftl/ftl_init.o 00:02:57.679 CC lib/nvmf/nvmf.o 00:02:57.679 CC lib/ftl/ftl_layout.o 00:02:57.679 CC lib/nvmf/nvmf_rpc.o 00:02:57.679 CC lib/ftl/ftl_debug.o 00:02:57.679 CC lib/nvmf/transport.o 00:02:57.679 CC lib/nvmf/tcp.o 00:02:57.679 CC lib/ftl/ftl_io.o 00:02:57.679 CC lib/nvmf/rdma.o 00:02:57.679 CC lib/nvmf/vfio_user.o 00:02:57.679 CC lib/ftl/ftl_sb.o 00:02:57.679 CC lib/ftl/ftl_l2p.o 00:02:57.679 CC lib/ftl/ftl_l2p_flat.o 00:02:57.679 CC lib/ftl/ftl_nv_cache.o 00:02:57.679 CC lib/ftl/ftl_band.o 00:02:57.679 CC lib/ftl/ftl_band_ops.o 00:02:57.679 CC lib/ftl/ftl_writer.o 00:02:57.679 CC lib/ftl/ftl_rq.o 00:02:57.679 CC lib/ftl/ftl_reloc.o 00:02:57.679 CC lib/ftl/ftl_l2p_cache.o 00:02:57.679 CC lib/ftl/ftl_p2l.o 00:02:57.679 CC lib/ftl/mngt/ftl_mngt.o 00:02:57.679 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:57.679 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:57.679 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:57.679 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:57.679 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:57.679 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:57.679 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:57.679 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:57.679 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:57.679 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:57.679 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:57.679 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:57.679 CC lib/ftl/utils/ftl_conf.o 00:02:57.679 CC lib/ftl/utils/ftl_md.o 00:02:57.679 CC lib/ftl/utils/ftl_mempool.o 00:02:57.679 CC lib/ftl/utils/ftl_bitmap.o 00:02:57.679 CC lib/ftl/utils/ftl_property.o 00:02:57.679 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:57.679 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:57.679 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:57.679 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:57.679 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:57.938 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:57.938 CC lib/ftl/upgrade/ftl_sb_v3.o 
00:02:57.938 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:57.938 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:57.938 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:57.938 CC lib/ftl/base/ftl_base_dev.o 00:02:57.938 CC lib/ftl/base/ftl_base_bdev.o 00:02:57.938 CC lib/ftl/ftl_trace.o 00:02:58.197 LIB libspdk_nbd.a 00:02:58.197 SO libspdk_nbd.so.6.0 00:02:58.458 LIB libspdk_scsi.a 00:02:58.458 SYMLINK libspdk_nbd.so 00:02:58.458 SO libspdk_scsi.so.8.0 00:02:58.458 LIB libspdk_ublk.a 00:02:58.458 SO libspdk_ublk.so.2.0 00:02:58.458 SYMLINK libspdk_scsi.so 00:02:58.458 SYMLINK libspdk_ublk.so 00:02:58.719 LIB libspdk_ftl.a 00:02:58.719 CC lib/iscsi/conn.o 00:02:58.719 CC lib/iscsi/init_grp.o 00:02:58.719 CC lib/iscsi/iscsi.o 00:02:58.719 CC lib/iscsi/md5.o 00:02:58.719 CC lib/iscsi/param.o 00:02:58.719 CC lib/iscsi/portal_grp.o 00:02:58.719 CC lib/iscsi/tgt_node.o 00:02:58.719 CC lib/iscsi/iscsi_subsystem.o 00:02:58.719 CC lib/vhost/vhost.o 00:02:58.719 CC lib/iscsi/iscsi_rpc.o 00:02:58.719 CC lib/vhost/vhost_rpc.o 00:02:58.719 CC lib/iscsi/task.o 00:02:58.719 CC lib/vhost/vhost_scsi.o 00:02:58.719 CC lib/vhost/rte_vhost_user.o 00:02:58.719 CC lib/vhost/vhost_blk.o 00:02:58.719 SO libspdk_ftl.so.8.0 00:02:59.290 SYMLINK libspdk_ftl.so 00:02:59.551 LIB libspdk_nvmf.a 00:02:59.551 SO libspdk_nvmf.so.17.0 00:02:59.551 LIB libspdk_vhost.a 00:02:59.813 SO libspdk_vhost.so.7.1 00:02:59.813 SYMLINK libspdk_vhost.so 00:02:59.813 SYMLINK libspdk_nvmf.so 00:02:59.813 LIB libspdk_iscsi.a 00:03:00.074 SO libspdk_iscsi.so.7.0 00:03:00.074 SYMLINK libspdk_iscsi.so 00:03:00.335 CC module/env_dpdk/env_dpdk_rpc.o 00:03:00.595 CC module/vfu_device/vfu_virtio.o 00:03:00.595 CC module/vfu_device/vfu_virtio_blk.o 00:03:00.595 CC module/vfu_device/vfu_virtio_scsi.o 00:03:00.595 CC module/vfu_device/vfu_virtio_rpc.o 00:03:00.595 CC module/accel/error/accel_error.o 00:03:00.595 CC module/accel/error/accel_error_rpc.o 00:03:00.595 CC module/accel/iaa/accel_iaa.o 00:03:00.596 CC module/accel/iaa/accel_iaa_rpc.o 00:03:00.596 CC module/scheduler/gscheduler/gscheduler.o 00:03:00.596 CC module/accel/ioat/accel_ioat.o 00:03:00.596 CC module/accel/ioat/accel_ioat_rpc.o 00:03:00.596 CC module/blob/bdev/blob_bdev.o 00:03:00.596 CC module/sock/posix/posix.o 00:03:00.596 CC module/accel/dsa/accel_dsa.o 00:03:00.596 CC module/accel/dsa/accel_dsa_rpc.o 00:03:00.596 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:00.596 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:00.596 LIB libspdk_env_dpdk_rpc.a 00:03:00.596 SO libspdk_env_dpdk_rpc.so.5.0 00:03:00.857 LIB libspdk_scheduler_gscheduler.a 00:03:00.857 SYMLINK libspdk_env_dpdk_rpc.so 00:03:00.857 LIB libspdk_accel_iaa.a 00:03:00.857 SO libspdk_scheduler_gscheduler.so.3.0 00:03:00.857 LIB libspdk_scheduler_dpdk_governor.a 00:03:00.857 LIB libspdk_accel_error.a 00:03:00.857 LIB libspdk_accel_dsa.a 00:03:00.857 SO libspdk_accel_iaa.so.2.0 00:03:00.857 LIB libspdk_accel_ioat.a 00:03:00.857 LIB libspdk_scheduler_dynamic.a 00:03:00.857 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:00.857 SO libspdk_accel_error.so.1.0 00:03:00.857 SO libspdk_accel_dsa.so.4.0 00:03:00.857 SYMLINK libspdk_scheduler_gscheduler.so 00:03:00.857 SO libspdk_accel_ioat.so.5.0 00:03:00.857 SO libspdk_scheduler_dynamic.so.3.0 00:03:00.857 SYMLINK libspdk_accel_iaa.so 00:03:00.857 LIB libspdk_blob_bdev.a 00:03:00.857 SYMLINK libspdk_accel_error.so 00:03:00.857 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:00.857 SYMLINK libspdk_accel_dsa.so 00:03:00.857 SO libspdk_blob_bdev.so.10.1 00:03:00.857 SYMLINK 
libspdk_accel_ioat.so 00:03:00.857 SYMLINK libspdk_scheduler_dynamic.so 00:03:00.857 SYMLINK libspdk_blob_bdev.so 00:03:01.117 LIB libspdk_vfu_device.a 00:03:01.117 SO libspdk_vfu_device.so.2.0 00:03:01.117 SYMLINK libspdk_vfu_device.so 00:03:01.117 LIB libspdk_sock_posix.a 00:03:01.377 SO libspdk_sock_posix.so.5.0 00:03:01.377 CC module/bdev/null/bdev_null.o 00:03:01.377 CC module/bdev/null/bdev_null_rpc.o 00:03:01.377 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:01.377 CC module/bdev/passthru/vbdev_passthru.o 00:03:01.377 CC module/bdev/gpt/gpt.o 00:03:01.377 CC module/bdev/gpt/vbdev_gpt.o 00:03:01.377 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:01.377 CC module/blobfs/bdev/blobfs_bdev.o 00:03:01.377 CC module/bdev/aio/bdev_aio.o 00:03:01.377 CC module/bdev/delay/vbdev_delay.o 00:03:01.377 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:01.377 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:01.377 CC module/bdev/malloc/bdev_malloc.o 00:03:01.377 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:01.377 CC module/bdev/aio/bdev_aio_rpc.o 00:03:01.377 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:01.377 CC module/bdev/lvol/vbdev_lvol.o 00:03:01.377 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:01.377 CC module/bdev/error/vbdev_error.o 00:03:01.377 CC module/bdev/iscsi/bdev_iscsi.o 00:03:01.377 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:01.377 CC module/bdev/error/vbdev_error_rpc.o 00:03:01.377 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:01.377 CC module/bdev/split/vbdev_split.o 00:03:01.377 CC module/bdev/split/vbdev_split_rpc.o 00:03:01.377 CC module/bdev/raid/bdev_raid.o 00:03:01.377 CC module/bdev/raid/bdev_raid_rpc.o 00:03:01.377 CC module/bdev/raid/raid0.o 00:03:01.377 CC module/bdev/nvme/bdev_nvme.o 00:03:01.377 CC module/bdev/raid/bdev_raid_sb.o 00:03:01.377 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:01.377 CC module/bdev/raid/raid1.o 00:03:01.377 CC module/bdev/nvme/nvme_rpc.o 00:03:01.377 CC module/bdev/ftl/bdev_ftl.o 00:03:01.377 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:01.377 CC module/bdev/nvme/bdev_mdns_client.o 00:03:01.377 CC module/bdev/raid/concat.o 00:03:01.377 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:01.377 CC module/bdev/nvme/vbdev_opal.o 00:03:01.377 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:01.377 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:01.377 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:01.377 SYMLINK libspdk_sock_posix.so 00:03:01.637 LIB libspdk_blobfs_bdev.a 00:03:01.637 LIB libspdk_bdev_null.a 00:03:01.637 SO libspdk_blobfs_bdev.so.5.0 00:03:01.637 LIB libspdk_bdev_split.a 00:03:01.637 SO libspdk_bdev_null.so.5.0 00:03:01.637 LIB libspdk_bdev_passthru.a 00:03:01.637 LIB libspdk_bdev_gpt.a 00:03:01.637 LIB libspdk_bdev_error.a 00:03:01.637 SO libspdk_bdev_split.so.5.0 00:03:01.637 SO libspdk_bdev_passthru.so.5.0 00:03:01.637 SO libspdk_bdev_gpt.so.5.0 00:03:01.637 SYMLINK libspdk_bdev_null.so 00:03:01.637 LIB libspdk_bdev_ftl.a 00:03:01.637 SO libspdk_bdev_error.so.5.0 00:03:01.638 LIB libspdk_bdev_aio.a 00:03:01.638 SYMLINK libspdk_blobfs_bdev.so 00:03:01.638 LIB libspdk_bdev_zone_block.a 00:03:01.638 SYMLINK libspdk_bdev_split.so 00:03:01.638 SO libspdk_bdev_aio.so.5.0 00:03:01.638 SYMLINK libspdk_bdev_error.so 00:03:01.638 LIB libspdk_bdev_iscsi.a 00:03:01.638 SO libspdk_bdev_ftl.so.5.0 00:03:01.638 SYMLINK libspdk_bdev_gpt.so 00:03:01.638 SO libspdk_bdev_zone_block.so.5.0 00:03:01.638 LIB libspdk_bdev_malloc.a 00:03:01.638 SYMLINK libspdk_bdev_passthru.so 00:03:01.638 LIB libspdk_bdev_delay.a 00:03:01.899 SO libspdk_bdev_delay.so.5.0 
00:03:01.899 SO libspdk_bdev_iscsi.so.5.0 00:03:01.899 SO libspdk_bdev_malloc.so.5.0 00:03:01.899 SYMLINK libspdk_bdev_aio.so 00:03:01.899 SYMLINK libspdk_bdev_zone_block.so 00:03:01.899 SYMLINK libspdk_bdev_ftl.so 00:03:01.899 LIB libspdk_bdev_lvol.a 00:03:01.899 LIB libspdk_bdev_virtio.a 00:03:01.899 SYMLINK libspdk_bdev_delay.so 00:03:01.899 SYMLINK libspdk_bdev_iscsi.so 00:03:01.899 SYMLINK libspdk_bdev_malloc.so 00:03:01.899 SO libspdk_bdev_lvol.so.5.0 00:03:01.899 SO libspdk_bdev_virtio.so.5.0 00:03:01.899 SYMLINK libspdk_bdev_lvol.so 00:03:01.899 SYMLINK libspdk_bdev_virtio.so 00:03:02.161 LIB libspdk_bdev_raid.a 00:03:02.161 SO libspdk_bdev_raid.so.5.0 00:03:02.480 SYMLINK libspdk_bdev_raid.so 00:03:03.111 LIB libspdk_bdev_nvme.a 00:03:03.111 SO libspdk_bdev_nvme.so.6.0 00:03:03.371 SYMLINK libspdk_bdev_nvme.so 00:03:03.943 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:03.943 CC module/event/subsystems/vmd/vmd.o 00:03:03.943 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:03.943 CC module/event/subsystems/iobuf/iobuf.o 00:03:03.943 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:03.943 CC module/event/subsystems/scheduler/scheduler.o 00:03:03.943 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:03.943 CC module/event/subsystems/sock/sock.o 00:03:03.943 LIB libspdk_event_vfu_tgt.a 00:03:03.943 LIB libspdk_event_vmd.a 00:03:03.943 LIB libspdk_event_sock.a 00:03:03.943 LIB libspdk_event_vhost_blk.a 00:03:03.943 LIB libspdk_event_scheduler.a 00:03:03.943 LIB libspdk_event_iobuf.a 00:03:03.943 SO libspdk_event_vfu_tgt.so.2.0 00:03:03.943 SO libspdk_event_vhost_blk.so.2.0 00:03:03.943 SO libspdk_event_sock.so.4.0 00:03:03.943 SO libspdk_event_vmd.so.5.0 00:03:03.943 SO libspdk_event_scheduler.so.3.0 00:03:03.943 SO libspdk_event_iobuf.so.2.0 00:03:03.943 SYMLINK libspdk_event_vfu_tgt.so 00:03:03.943 SYMLINK libspdk_event_sock.so 00:03:03.943 SYMLINK libspdk_event_vmd.so 00:03:04.203 SYMLINK libspdk_event_vhost_blk.so 00:03:04.203 SYMLINK libspdk_event_scheduler.so 00:03:04.203 SYMLINK libspdk_event_iobuf.so 00:03:04.203 CC module/event/subsystems/accel/accel.o 00:03:04.463 LIB libspdk_event_accel.a 00:03:04.463 SO libspdk_event_accel.so.5.0 00:03:04.463 SYMLINK libspdk_event_accel.so 00:03:04.724 CC module/event/subsystems/bdev/bdev.o 00:03:04.984 LIB libspdk_event_bdev.a 00:03:04.984 SO libspdk_event_bdev.so.5.0 00:03:04.984 SYMLINK libspdk_event_bdev.so 00:03:05.245 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:05.245 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:05.245 CC module/event/subsystems/scsi/scsi.o 00:03:05.245 CC module/event/subsystems/ublk/ublk.o 00:03:05.245 CC module/event/subsystems/nbd/nbd.o 00:03:05.506 LIB libspdk_event_ublk.a 00:03:05.506 LIB libspdk_event_scsi.a 00:03:05.506 LIB libspdk_event_nbd.a 00:03:05.506 SO libspdk_event_ublk.so.2.0 00:03:05.506 SO libspdk_event_scsi.so.5.0 00:03:05.506 LIB libspdk_event_nvmf.a 00:03:05.506 SO libspdk_event_nbd.so.5.0 00:03:05.506 SO libspdk_event_nvmf.so.5.0 00:03:05.506 SYMLINK libspdk_event_ublk.so 00:03:05.506 SYMLINK libspdk_event_scsi.so 00:03:05.506 SYMLINK libspdk_event_nbd.so 00:03:05.767 SYMLINK libspdk_event_nvmf.so 00:03:05.767 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:05.767 CC module/event/subsystems/iscsi/iscsi.o 00:03:06.028 LIB libspdk_event_vhost_scsi.a 00:03:06.028 SO libspdk_event_vhost_scsi.so.2.0 00:03:06.028 LIB libspdk_event_iscsi.a 00:03:06.028 SO libspdk_event_iscsi.so.5.0 00:03:06.028 SYMLINK libspdk_event_vhost_scsi.so 00:03:06.028 SYMLINK libspdk_event_iscsi.so 
00:03:06.289 SO libspdk.so.5.0 00:03:06.289 SYMLINK libspdk.so 00:03:06.549 CC app/trace_record/trace_record.o 00:03:06.549 CXX app/trace/trace.o 00:03:06.549 CC app/spdk_nvme_perf/perf.o 00:03:06.549 CC app/spdk_lspci/spdk_lspci.o 00:03:06.549 TEST_HEADER include/spdk/accel_module.h 00:03:06.549 TEST_HEADER include/spdk/accel.h 00:03:06.549 CC app/spdk_nvme_discover/discovery_aer.o 00:03:06.549 TEST_HEADER include/spdk/bdev.h 00:03:06.549 TEST_HEADER include/spdk/barrier.h 00:03:06.549 CC test/rpc_client/rpc_client_test.o 00:03:06.549 TEST_HEADER include/spdk/base64.h 00:03:06.549 TEST_HEADER include/spdk/bdev_module.h 00:03:06.549 TEST_HEADER include/spdk/assert.h 00:03:06.549 TEST_HEADER include/spdk/bdev_zone.h 00:03:06.549 TEST_HEADER include/spdk/bit_array.h 00:03:06.549 TEST_HEADER include/spdk/bit_pool.h 00:03:06.549 TEST_HEADER include/spdk/blob_bdev.h 00:03:06.549 CC app/spdk_nvme_identify/identify.o 00:03:06.549 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:06.549 TEST_HEADER include/spdk/blob.h 00:03:06.549 TEST_HEADER include/spdk/blobfs.h 00:03:06.549 TEST_HEADER include/spdk/config.h 00:03:06.549 TEST_HEADER include/spdk/cpuset.h 00:03:06.549 TEST_HEADER include/spdk/conf.h 00:03:06.549 TEST_HEADER include/spdk/crc16.h 00:03:06.549 TEST_HEADER include/spdk/crc32.h 00:03:06.549 TEST_HEADER include/spdk/crc64.h 00:03:06.549 CC app/spdk_top/spdk_top.o 00:03:06.549 TEST_HEADER include/spdk/dif.h 00:03:06.549 CC app/iscsi_tgt/iscsi_tgt.o 00:03:06.549 TEST_HEADER include/spdk/dma.h 00:03:06.549 TEST_HEADER include/spdk/endian.h 00:03:06.549 TEST_HEADER include/spdk/env_dpdk.h 00:03:06.549 TEST_HEADER include/spdk/env.h 00:03:06.549 TEST_HEADER include/spdk/event.h 00:03:06.549 CC app/nvmf_tgt/nvmf_main.o 00:03:06.549 TEST_HEADER include/spdk/fd.h 00:03:06.549 TEST_HEADER include/spdk/fd_group.h 00:03:06.549 TEST_HEADER include/spdk/ftl.h 00:03:06.549 TEST_HEADER include/spdk/file.h 00:03:06.549 TEST_HEADER include/spdk/gpt_spec.h 00:03:06.549 TEST_HEADER include/spdk/hexlify.h 00:03:06.549 TEST_HEADER include/spdk/histogram_data.h 00:03:06.549 CC app/spdk_dd/spdk_dd.o 00:03:06.549 TEST_HEADER include/spdk/idxd.h 00:03:06.549 TEST_HEADER include/spdk/init.h 00:03:06.549 TEST_HEADER include/spdk/idxd_spec.h 00:03:06.549 TEST_HEADER include/spdk/ioat.h 00:03:06.549 TEST_HEADER include/spdk/ioat_spec.h 00:03:06.549 TEST_HEADER include/spdk/iscsi_spec.h 00:03:06.549 TEST_HEADER include/spdk/json.h 00:03:06.549 TEST_HEADER include/spdk/jsonrpc.h 00:03:06.549 TEST_HEADER include/spdk/likely.h 00:03:06.549 TEST_HEADER include/spdk/log.h 00:03:06.549 TEST_HEADER include/spdk/memory.h 00:03:06.549 TEST_HEADER include/spdk/lvol.h 00:03:06.549 CC app/spdk_tgt/spdk_tgt.o 00:03:06.549 TEST_HEADER include/spdk/mmio.h 00:03:06.549 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:06.549 TEST_HEADER include/spdk/nvme.h 00:03:06.549 TEST_HEADER include/spdk/notify.h 00:03:06.549 TEST_HEADER include/spdk/nbd.h 00:03:06.549 CC app/vhost/vhost.o 00:03:06.549 TEST_HEADER include/spdk/nvme_intel.h 00:03:06.549 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:06.549 TEST_HEADER include/spdk/nvme_spec.h 00:03:06.549 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:06.549 TEST_HEADER include/spdk/nvme_zns.h 00:03:06.549 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:06.549 TEST_HEADER include/spdk/nvmf.h 00:03:06.549 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:06.549 TEST_HEADER include/spdk/nvmf_spec.h 00:03:06.549 TEST_HEADER include/spdk/nvmf_transport.h 00:03:06.549 TEST_HEADER include/spdk/opal.h 00:03:06.549 
TEST_HEADER include/spdk/pci_ids.h 00:03:06.549 TEST_HEADER include/spdk/opal_spec.h 00:03:06.549 TEST_HEADER include/spdk/pipe.h 00:03:06.549 TEST_HEADER include/spdk/queue.h 00:03:06.816 TEST_HEADER include/spdk/reduce.h 00:03:06.816 TEST_HEADER include/spdk/rpc.h 00:03:06.816 TEST_HEADER include/spdk/scsi.h 00:03:06.816 TEST_HEADER include/spdk/scsi_spec.h 00:03:06.816 TEST_HEADER include/spdk/scheduler.h 00:03:06.816 TEST_HEADER include/spdk/sock.h 00:03:06.816 TEST_HEADER include/spdk/string.h 00:03:06.816 TEST_HEADER include/spdk/stdinc.h 00:03:06.816 CC test/env/memory/memory_ut.o 00:03:06.816 TEST_HEADER include/spdk/thread.h 00:03:06.816 TEST_HEADER include/spdk/trace_parser.h 00:03:06.816 TEST_HEADER include/spdk/trace.h 00:03:06.816 TEST_HEADER include/spdk/tree.h 00:03:06.816 TEST_HEADER include/spdk/ublk.h 00:03:06.816 TEST_HEADER include/spdk/uuid.h 00:03:06.816 TEST_HEADER include/spdk/util.h 00:03:06.816 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:06.816 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:06.816 TEST_HEADER include/spdk/version.h 00:03:06.816 TEST_HEADER include/spdk/vhost.h 00:03:06.816 TEST_HEADER include/spdk/vmd.h 00:03:06.816 TEST_HEADER include/spdk/xor.h 00:03:06.816 TEST_HEADER include/spdk/zipf.h 00:03:06.816 CXX test/cpp_headers/accel.o 00:03:06.816 CC test/env/pci/pci_ut.o 00:03:06.816 CXX test/cpp_headers/accel_module.o 00:03:06.816 CXX test/cpp_headers/assert.o 00:03:06.816 CXX test/cpp_headers/base64.o 00:03:06.816 CXX test/cpp_headers/barrier.o 00:03:06.816 CC test/env/vtophys/vtophys.o 00:03:06.816 CXX test/cpp_headers/bdev.o 00:03:06.816 CXX test/cpp_headers/bit_array.o 00:03:06.816 CXX test/cpp_headers/bdev_module.o 00:03:06.816 CXX test/cpp_headers/bdev_zone.o 00:03:06.816 CXX test/cpp_headers/bit_pool.o 00:03:06.816 CXX test/cpp_headers/blob_bdev.o 00:03:06.816 CXX test/cpp_headers/blobfs_bdev.o 00:03:06.816 CXX test/cpp_headers/config.o 00:03:06.816 CXX test/cpp_headers/blob.o 00:03:06.816 CC test/nvme/sgl/sgl.o 00:03:06.816 CXX test/cpp_headers/conf.o 00:03:06.816 CXX test/cpp_headers/blobfs.o 00:03:06.816 CC test/nvme/simple_copy/simple_copy.o 00:03:06.816 CXX test/cpp_headers/cpuset.o 00:03:06.816 CXX test/cpp_headers/crc16.o 00:03:06.816 CXX test/cpp_headers/crc32.o 00:03:06.816 CC test/nvme/startup/startup.o 00:03:06.816 CXX test/cpp_headers/dif.o 00:03:06.816 CXX test/cpp_headers/crc64.o 00:03:06.816 CXX test/cpp_headers/dma.o 00:03:06.816 CXX test/cpp_headers/endian.o 00:03:06.816 CXX test/cpp_headers/env_dpdk.o 00:03:06.816 CXX test/cpp_headers/event.o 00:03:06.816 CXX test/cpp_headers/fd_group.o 00:03:06.816 CXX test/cpp_headers/env.o 00:03:06.816 CC test/nvme/overhead/overhead.o 00:03:06.816 CC test/app/stub/stub.o 00:03:06.816 CC test/nvme/err_injection/err_injection.o 00:03:06.816 CXX test/cpp_headers/fd.o 00:03:06.816 CXX test/cpp_headers/ftl.o 00:03:06.816 CXX test/cpp_headers/file.o 00:03:06.816 CXX test/cpp_headers/gpt_spec.o 00:03:06.816 CXX test/cpp_headers/hexlify.o 00:03:06.816 CC examples/sock/hello_world/hello_sock.o 00:03:06.816 CXX test/cpp_headers/histogram_data.o 00:03:06.816 CC test/nvme/connect_stress/connect_stress.o 00:03:06.816 CC test/nvme/reset/reset.o 00:03:06.816 CXX test/cpp_headers/idxd.o 00:03:06.816 CXX test/cpp_headers/ioat.o 00:03:06.816 CC test/nvme/boot_partition/boot_partition.o 00:03:06.816 CXX test/cpp_headers/idxd_spec.o 00:03:06.816 CC test/app/histogram_perf/histogram_perf.o 00:03:06.816 CXX test/cpp_headers/init.o 00:03:06.816 CC test/app/jsoncat/jsoncat.o 00:03:06.816 CC 
test/nvme/e2edp/nvme_dp.o 00:03:06.816 CXX test/cpp_headers/ioat_spec.o 00:03:06.816 CXX test/cpp_headers/iscsi_spec.o 00:03:06.816 CXX test/cpp_headers/json.o 00:03:06.816 CC test/thread/poller_perf/poller_perf.o 00:03:06.816 CC test/nvme/aer/aer.o 00:03:06.816 CXX test/cpp_headers/likely.o 00:03:06.816 CXX test/cpp_headers/jsonrpc.o 00:03:06.816 CXX test/cpp_headers/log.o 00:03:06.816 CXX test/cpp_headers/lvol.o 00:03:06.816 CC test/nvme/compliance/nvme_compliance.o 00:03:06.816 CXX test/cpp_headers/memory.o 00:03:06.816 CXX test/cpp_headers/mmio.o 00:03:06.816 CXX test/cpp_headers/nbd.o 00:03:06.816 CXX test/cpp_headers/notify.o 00:03:06.816 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:06.816 CXX test/cpp_headers/nvme.o 00:03:06.816 CC test/nvme/fdp/fdp.o 00:03:06.816 CXX test/cpp_headers/nvme_intel.o 00:03:06.816 CXX test/cpp_headers/nvme_spec.o 00:03:06.816 CXX test/cpp_headers/nvme_ocssd.o 00:03:06.816 CC test/nvme/cuse/cuse.o 00:03:06.816 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:06.816 CXX test/cpp_headers/nvme_zns.o 00:03:06.816 CC test/nvme/fused_ordering/fused_ordering.o 00:03:06.816 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:06.816 CXX test/cpp_headers/nvmf_cmd.o 00:03:06.816 CXX test/cpp_headers/nvmf.o 00:03:06.816 CC test/nvme/reserve/reserve.o 00:03:06.816 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:06.816 CXX test/cpp_headers/nvmf_spec.o 00:03:06.816 CXX test/cpp_headers/opal.o 00:03:06.816 CXX test/cpp_headers/nvmf_transport.o 00:03:06.816 CC test/accel/dif/dif.o 00:03:06.816 CC test/event/event_perf/event_perf.o 00:03:06.816 CXX test/cpp_headers/opal_spec.o 00:03:06.816 CC examples/idxd/perf/perf.o 00:03:06.816 CXX test/cpp_headers/pci_ids.o 00:03:06.816 CC app/fio/nvme/fio_plugin.o 00:03:06.816 CXX test/cpp_headers/pipe.o 00:03:06.816 CC examples/vmd/led/led.o 00:03:06.816 CC examples/bdev/bdevperf/bdevperf.o 00:03:06.816 CXX test/cpp_headers/queue.o 00:03:06.816 CC examples/util/zipf/zipf.o 00:03:06.816 CXX test/cpp_headers/reduce.o 00:03:06.816 CXX test/cpp_headers/rpc.o 00:03:06.816 CXX test/cpp_headers/scheduler.o 00:03:06.816 CC examples/nvme/hotplug/hotplug.o 00:03:06.816 CC examples/nvme/hello_world/hello_world.o 00:03:06.816 CC test/app/bdev_svc/bdev_svc.o 00:03:06.816 CC test/blobfs/mkfs/mkfs.o 00:03:06.816 CC test/event/reactor_perf/reactor_perf.o 00:03:06.816 CC examples/vmd/lsvmd/lsvmd.o 00:03:06.816 CXX test/cpp_headers/scsi.o 00:03:06.816 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:06.816 CC test/dma/test_dma/test_dma.o 00:03:06.816 CC test/event/reactor/reactor.o 00:03:06.816 CC examples/nvme/abort/abort.o 00:03:06.816 CC examples/accel/perf/accel_perf.o 00:03:06.816 CC examples/ioat/perf/perf.o 00:03:06.816 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:06.816 CC test/event/app_repeat/app_repeat.o 00:03:06.816 CC examples/ioat/verify/verify.o 00:03:06.816 CC examples/nvme/reconnect/reconnect.o 00:03:06.816 CC examples/nvmf/nvmf/nvmf.o 00:03:06.816 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:06.816 CC examples/blob/cli/blobcli.o 00:03:06.816 CC app/fio/bdev/fio_plugin.o 00:03:06.816 CC examples/nvme/arbitration/arbitration.o 00:03:06.816 CC examples/bdev/hello_world/hello_bdev.o 00:03:06.816 CC test/event/scheduler/scheduler.o 00:03:06.816 CC test/bdev/bdevio/bdevio.o 00:03:06.816 CXX test/cpp_headers/scsi_spec.o 00:03:06.816 CC examples/blob/hello_world/hello_blob.o 00:03:06.816 CC examples/thread/thread/thread_ex.o 00:03:07.086 CXX test/cpp_headers/sock.o 00:03:07.086 CC test/env/mem_callbacks/mem_callbacks.o 
00:03:07.086 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:07.086 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:07.086 CC test/lvol/esnap/esnap.o 00:03:07.086 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:07.086 LINK spdk_lspci 00:03:07.086 LINK rpc_client_test 00:03:07.351 LINK interrupt_tgt 00:03:07.352 LINK spdk_trace_record 00:03:07.352 LINK nvmf_tgt 00:03:07.352 LINK iscsi_tgt 00:03:07.352 LINK spdk_nvme_discover 00:03:07.352 LINK vhost 00:03:07.352 LINK vtophys 00:03:07.352 LINK histogram_perf 00:03:07.352 LINK jsoncat 00:03:07.352 LINK startup 00:03:07.352 LINK boot_partition 00:03:07.352 LINK spdk_tgt 00:03:07.352 LINK event_perf 00:03:07.352 LINK connect_stress 00:03:07.610 LINK err_injection 00:03:07.610 LINK env_dpdk_post_init 00:03:07.610 LINK led 00:03:07.610 LINK reactor_perf 00:03:07.610 LINK lsvmd 00:03:07.610 LINK poller_perf 00:03:07.610 LINK reactor 00:03:07.610 LINK pmr_persistence 00:03:07.610 LINK zipf 00:03:07.610 LINK reserve 00:03:07.610 LINK stub 00:03:07.610 LINK fused_ordering 00:03:07.610 LINK simple_copy 00:03:07.610 LINK cmb_copy 00:03:07.610 CXX test/cpp_headers/stdinc.o 00:03:07.610 LINK reset 00:03:07.610 LINK bdev_svc 00:03:07.610 LINK hello_sock 00:03:07.610 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:07.610 LINK app_repeat 00:03:07.610 LINK mkfs 00:03:07.610 LINK verify 00:03:07.610 CXX test/cpp_headers/string.o 00:03:07.610 CXX test/cpp_headers/thread.o 00:03:07.610 LINK nvme_dp 00:03:07.610 LINK doorbell_aers 00:03:07.610 CXX test/cpp_headers/trace.o 00:03:07.610 CXX test/cpp_headers/trace_parser.o 00:03:07.610 CXX test/cpp_headers/tree.o 00:03:07.610 CXX test/cpp_headers/ublk.o 00:03:07.610 CXX test/cpp_headers/util.o 00:03:07.610 CXX test/cpp_headers/uuid.o 00:03:07.610 CXX test/cpp_headers/version.o 00:03:07.610 CXX test/cpp_headers/vfio_user_pci.o 00:03:07.610 LINK hello_world 00:03:07.610 CXX test/cpp_headers/vfio_user_spec.o 00:03:07.610 CXX test/cpp_headers/vhost.o 00:03:07.610 LINK hello_bdev 00:03:07.610 LINK ioat_perf 00:03:07.610 CXX test/cpp_headers/vmd.o 00:03:07.610 CXX test/cpp_headers/xor.o 00:03:07.610 LINK scheduler 00:03:07.610 CXX test/cpp_headers/zipf.o 00:03:07.610 LINK hotplug 00:03:07.611 LINK spdk_dd 00:03:07.611 LINK sgl 00:03:07.611 LINK overhead 00:03:07.611 LINK nvmf 00:03:07.871 LINK fdp 00:03:07.871 LINK aer 00:03:07.871 LINK spdk_trace 00:03:07.871 LINK hello_blob 00:03:07.871 LINK nvme_compliance 00:03:07.871 LINK thread 00:03:07.871 LINK idxd_perf 00:03:07.871 LINK pci_ut 00:03:07.871 LINK arbitration 00:03:07.871 LINK reconnect 00:03:07.871 LINK dif 00:03:07.871 LINK abort 00:03:07.871 LINK bdevio 00:03:07.871 LINK test_dma 00:03:07.871 LINK accel_perf 00:03:07.871 LINK nvme_fuzz 00:03:07.871 LINK nvme_manage 00:03:08.145 LINK spdk_nvme 00:03:08.145 LINK blobcli 00:03:08.145 LINK spdk_bdev 00:03:08.145 LINK spdk_nvme_perf 00:03:08.145 LINK mem_callbacks 00:03:08.145 LINK vhost_fuzz 00:03:08.145 LINK spdk_nvme_identify 00:03:08.145 LINK memory_ut 00:03:08.145 LINK bdevperf 00:03:08.411 LINK spdk_top 00:03:08.411 LINK cuse 00:03:08.983 LINK iscsi_fuzz 00:03:10.899 LINK esnap 00:03:11.471 00:03:11.471 real 0m32.421s 00:03:11.471 user 5m4.189s 00:03:11.471 sys 3m13.725s 00:03:11.471 22:59:33 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:11.471 22:59:33 -- common/autotest_common.sh@10 -- $ set +x 00:03:11.472 ************************************ 00:03:11.472 END TEST make 00:03:11.472 ************************************ 00:03:11.472 22:59:34 -- spdk/autotest.sh@25 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:11.472 22:59:34 -- nvmf/common.sh@7 -- # uname -s 00:03:11.472 22:59:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:11.472 22:59:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:11.472 22:59:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:11.472 22:59:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:11.472 22:59:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:11.472 22:59:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:11.472 22:59:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:11.472 22:59:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:11.472 22:59:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:11.472 22:59:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:11.472 22:59:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:03:11.472 22:59:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:03:11.472 22:59:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:11.472 22:59:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:11.472 22:59:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:11.472 22:59:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:11.472 22:59:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:11.472 22:59:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:11.472 22:59:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:11.472 22:59:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.472 22:59:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.472 22:59:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.472 22:59:34 -- paths/export.sh@5 -- # export PATH 00:03:11.472 22:59:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.472 22:59:34 -- nvmf/common.sh@46 -- # : 0 00:03:11.472 22:59:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:11.472 22:59:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:11.472 22:59:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:11.472 22:59:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:11.472 22:59:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:11.472 22:59:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:11.472 22:59:34 -- nvmf/common.sh@34 
-- # '[' 0 -eq 1 ']' 00:03:11.472 22:59:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:11.472 22:59:34 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:11.472 22:59:34 -- spdk/autotest.sh@32 -- # uname -s 00:03:11.472 22:59:34 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:11.472 22:59:34 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:11.472 22:59:34 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:11.472 22:59:34 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:11.472 22:59:34 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:11.472 22:59:34 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:11.472 22:59:34 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:11.472 22:59:34 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:11.472 22:59:34 -- spdk/autotest.sh@48 -- # udevadm_pid=2555647 00:03:11.472 22:59:34 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:03:11.472 22:59:34 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:11.472 22:59:34 -- spdk/autotest.sh@54 -- # echo 2555649 00:03:11.472 22:59:34 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:03:11.472 22:59:34 -- spdk/autotest.sh@56 -- # echo 2555650 00:03:11.472 22:59:34 -- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]] 00:03:11.472 22:59:34 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:03:11.472 22:59:34 -- spdk/autotest.sh@60 -- # echo 2555651 00:03:11.472 22:59:34 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:03:11.472 22:59:34 -- spdk/autotest.sh@62 -- # echo 2555652 00:03:11.472 22:59:34 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:11.472 22:59:34 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:11.472 22:59:34 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:03:11.472 22:59:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:11.472 22:59:34 -- common/autotest_common.sh@10 -- # set +x 00:03:11.472 22:59:34 -- spdk/autotest.sh@70 -- # create_test_list 00:03:11.472 22:59:34 -- common/autotest_common.sh@736 -- # xtrace_disable 00:03:11.472 22:59:34 -- common/autotest_common.sh@10 -- # set +x 00:03:11.472 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:03:11.472 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:03:11.734 22:59:34 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:11.734 22:59:34 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:11.734 22:59:34 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
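The xtrace lines above show autotest.sh sourcing test/nvmf/common.sh, which pins the TCP listener ports, the target IP prefix, and a freshly generated host NQN before any NVMe-oF test runs. The fragment below is a minimal bash sketch of that environment, restating only the values visible in the trace; the `nvme gen-hostnqn` call assumes nvme-cli is installed on the test node, and the way the host ID is derived from the NQN here is purely illustrative.

    # Sketch of the NVMe-oF test environment established by test/nvmf/common.sh
    # (values copied from the trace above; not the authoritative script).
    NVMF_PORT=4420                      # first TCP listener port
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVMF_IP_PREFIX=192.168.100          # subnet used by the target interfaces
    NVMF_IP_LEAST_ADDR=8
    NVMF_TCP_IP_ADDRESS=127.0.0.1
    NVMF_SERIAL=SPDKISFASTANDAWESOME
    NVME_HOSTNQN=$(nvme gen-hostnqn)    # requires nvme-cli
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:} # illustrative: host ID is the UUID part of the NQN
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVME_CONNECT='nvme connect'
    NET_TYPE=phy

NET_TYPE=phy presumably corresponds to the "phy" in this job's name (nvmf-tcp-phy-autotest), i.e. the tests run against physical NICs rather than a purely virtual setup.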
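Right after that, the trace shows the pre-test plumbing: the stock systemd-coredump core pattern is saved, core dumps are rerouted to SPDK's core-collector.sh with the job's output directory for storage, nbd is loaded, and a udevadm monitor plus several CPU/power sampling collectors are started (their PIDs, 2555647-2555652, appear above). A rough reconstruction for a local run follows; writing the pattern into /proc/sys/kernel/core_pattern is an assumption, since the xtrace only shows the echo commands, and the paths are simply this job's workspace paths.

    # Rough reconstruction of the traced pre-test setup (assumptions noted inline).
    src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    out=$src/../output

    old_core_pattern=$(cat /proc/sys/kernel/core_pattern)   # previously '|/usr/lib/systemd/systemd-coredump ...'
    mkdir -p "$out/coredumps" "$out/power"
    # Assumed destination of the echoed pattern; the trace shows only the echo itself.
    echo "|$src/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern

    modprobe nbd                                             # nbd is exercised by later bdev tests

    /usr/sbin/udevadm monitor --property &                   # background device-event log
    "$src/scripts/perf/pm/collect-cpu-load" -d "$out/power" &
    "$src/scripts/perf/pm/collect-vmstat"   -d "$out/power" &
    "$src/scripts/perf/pm/collect-cpu-temp" -d "$out/power" -l &
    "$src/scripts/perf/pm/collect-bmc-pm"   -d "$out/power" -l &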
00:03:11.734 22:59:34 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:11.734 22:59:34 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:11.734 22:59:34 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:11.734 22:59:34 -- common/autotest_common.sh@1440 -- # uname 00:03:11.734 22:59:34 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:03:11.734 22:59:34 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:11.734 22:59:34 -- common/autotest_common.sh@1460 -- # uname 00:03:11.734 22:59:34 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:03:11.734 22:59:34 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:03:11.734 22:59:34 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:03:11.734 22:59:34 -- spdk/autotest.sh@83 -- # hash lcov 00:03:11.734 22:59:34 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:11.734 22:59:34 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:03:11.734 --rc lcov_branch_coverage=1 00:03:11.734 --rc lcov_function_coverage=1 00:03:11.734 --rc genhtml_branch_coverage=1 00:03:11.734 --rc genhtml_function_coverage=1 00:03:11.734 --rc genhtml_legend=1 00:03:11.734 --rc geninfo_all_blocks=1 00:03:11.734 ' 00:03:11.734 22:59:34 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:03:11.734 --rc lcov_branch_coverage=1 00:03:11.734 --rc lcov_function_coverage=1 00:03:11.734 --rc genhtml_branch_coverage=1 00:03:11.734 --rc genhtml_function_coverage=1 00:03:11.734 --rc genhtml_legend=1 00:03:11.734 --rc geninfo_all_blocks=1 00:03:11.734 ' 00:03:11.734 22:59:34 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:03:11.734 --rc lcov_branch_coverage=1 00:03:11.734 --rc lcov_function_coverage=1 00:03:11.734 --rc genhtml_branch_coverage=1 00:03:11.734 --rc genhtml_function_coverage=1 00:03:11.734 --rc genhtml_legend=1 00:03:11.734 --rc geninfo_all_blocks=1 00:03:11.734 --no-external' 00:03:11.734 22:59:34 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:03:11.734 --rc lcov_branch_coverage=1 00:03:11.734 --rc lcov_function_coverage=1 00:03:11.734 --rc genhtml_branch_coverage=1 00:03:11.735 --rc genhtml_function_coverage=1 00:03:11.735 --rc genhtml_legend=1 00:03:11.735 --rc geninfo_all_blocks=1 00:03:11.735 --no-external' 00:03:11.735 22:59:34 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:11.735 lcov: LCOV version 1.14 00:03:11.735 22:59:34 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:23.969 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:23.969 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:23.969 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:23.969 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:23.969 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:23.969 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:38.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:38.872 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:38.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:38.872 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:38.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:38.872 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:38.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:38.872 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:38.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:38.872 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:38.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:38.872 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:38.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:38.872 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:38.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:38.872 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:38.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:38.872 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:38.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:38.872 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:38.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:38.872 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:38.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:38.872 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:38.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:38.872 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:38.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:38.872 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:38.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:38.872 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:38.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:38.872 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:38.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:38.872 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:38.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:38.872 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:38.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:38.872 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:38.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:38.872 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:38.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:38.872 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:38.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:38.872 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:38.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:38.872 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:38.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions 
found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:38.873 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:38.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:38.873 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:39.133 23:00:01 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:03:39.133 23:00:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:39.133 23:00:01 -- common/autotest_common.sh@10 -- # set +x 00:03:39.133 23:00:01 -- spdk/autotest.sh@102 -- # rm -f 00:03:39.133 23:00:01 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:43.333 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:43.333 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:43.333 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:43.333 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:43.333 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:43.333 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:43.333 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:43.333 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:43.333 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:43.333 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:43.333 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:43.333 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:43.333 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:43.333 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:43.333 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:43.333 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:43.333 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:43.333 23:00:05 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:03:43.333 23:00:05 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:43.333 23:00:05 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:43.333 23:00:05 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:43.333 23:00:05 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:43.333 23:00:05 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:43.333 23:00:05 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:43.333 23:00:05 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:43.333 23:00:05 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:43.333 23:00:05 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:03:43.333 23:00:05 -- spdk/autotest.sh@121 
-- # ls /dev/nvme0n1 00:03:43.333 23:00:05 -- spdk/autotest.sh@121 -- # grep -v p 00:03:43.333 23:00:05 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:43.333 23:00:05 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:43.333 23:00:05 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:03:43.333 23:00:05 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:43.333 23:00:05 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:43.333 No valid GPT data, bailing 00:03:43.333 23:00:05 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:43.333 23:00:05 -- scripts/common.sh@393 -- # pt= 00:03:43.333 23:00:05 -- scripts/common.sh@394 -- # return 1 00:03:43.333 23:00:05 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:43.333 1+0 records in 00:03:43.333 1+0 records out 00:03:43.333 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00166136 s, 631 MB/s 00:03:43.333 23:00:05 -- spdk/autotest.sh@129 -- # sync 00:03:43.333 23:00:05 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:43.333 23:00:05 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:43.333 23:00:05 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:51.477 23:00:13 -- spdk/autotest.sh@135 -- # uname -s 00:03:51.477 23:00:13 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:03:51.477 23:00:13 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:51.477 23:00:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:51.477 23:00:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:51.477 23:00:13 -- common/autotest_common.sh@10 -- # set +x 00:03:51.477 ************************************ 00:03:51.477 START TEST setup.sh 00:03:51.477 ************************************ 00:03:51.477 23:00:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:51.477 * Looking for test storage... 00:03:51.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:51.477 23:00:13 -- setup/test-setup.sh@10 -- # uname -s 00:03:51.477 23:00:13 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:51.477 23:00:13 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:51.477 23:00:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:51.477 23:00:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:51.477 23:00:13 -- common/autotest_common.sh@10 -- # set +x 00:03:51.477 ************************************ 00:03:51.477 START TEST acl 00:03:51.477 ************************************ 00:03:51.477 23:00:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:51.477 * Looking for test storage... 
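The pre-cleanup pass traced above probes /dev/nvme0n1 for a partition table (spdk-gpt.py first, then a blkid PTTYPE fallback that comes back empty) and, only because nothing is found, zeroes the first 1 MiB of the namespace and syncs. A minimal standalone sketch of that probe-then-wipe flow is below; the helper name wipe_if_unused is illustrative, and only the blkid/dd commands themselves mirror the trace (the dd needs root).

    #!/usr/bin/env bash
    # Hypothetical helper: wipe the start of a namespace only when no
    # partition table is detected, mirroring the PTTYPE probe in the trace.
    wipe_if_unused() {
        local dev=$1
        local pt
        # blkid prints the partition-table type (gpt, dos, ...) or nothing.
        pt=$(blkid -s PTTYPE -o value "$dev" || true)
        if [[ -n $pt ]]; then
            echo "$dev already carries a '$pt' partition table, leaving it alone"
            return 1
        fi
        # No table found: clear the first megabyte so later tests start clean.
        dd if=/dev/zero of="$dev" bs=1M count=1
        sync
    }

    wipe_if_unused /dev/nvme0n1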
00:03:51.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:51.477 23:00:13 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:51.477 23:00:13 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:51.477 23:00:13 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:51.477 23:00:13 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:51.477 23:00:13 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:51.477 23:00:13 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:51.477 23:00:13 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:51.477 23:00:13 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:51.477 23:00:13 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:51.477 23:00:13 -- setup/acl.sh@12 -- # devs=() 00:03:51.477 23:00:13 -- setup/acl.sh@12 -- # declare -a devs 00:03:51.477 23:00:13 -- setup/acl.sh@13 -- # drivers=() 00:03:51.477 23:00:13 -- setup/acl.sh@13 -- # declare -A drivers 00:03:51.477 23:00:13 -- setup/acl.sh@51 -- # setup reset 00:03:51.477 23:00:13 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:51.477 23:00:13 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:55.689 23:00:17 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:55.689 23:00:17 -- setup/acl.sh@16 -- # local dev driver 00:03:55.689 23:00:17 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:55.689 23:00:17 -- setup/acl.sh@15 -- # setup output status 00:03:55.689 23:00:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.689 23:00:17 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:58.234 Hugepages 00:03:58.235 node hugesize free / total 00:03:58.235 23:00:20 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:58.235 23:00:20 -- setup/acl.sh@19 -- # continue 00:03:58.235 23:00:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.235 23:00:20 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:58.235 23:00:20 -- setup/acl.sh@19 -- # continue 00:03:58.235 23:00:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.235 23:00:20 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:58.235 23:00:20 -- setup/acl.sh@19 -- # continue 00:03:58.235 23:00:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.496 00:03:58.496 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:58.496 23:00:20 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:58.496 23:00:20 -- setup/acl.sh@19 -- # continue 00:03:58.496 23:00:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.496 23:00:20 -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:58.496 23:00:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.496 23:00:20 -- setup/acl.sh@20 -- # continue 00:03:58.496 23:00:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.496 23:00:20 -- setup/acl.sh@19 -- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:58.496 23:00:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.496 23:00:20 -- setup/acl.sh@20 -- # continue 00:03:58.496 23:00:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.496 23:00:20 -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:58.496 23:00:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.496 23:00:20 -- setup/acl.sh@20 -- # continue 00:03:58.496 23:00:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
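The acl test above walks the "setup.sh status" table with "read -r _ dev _ _ _ driver _", keeping the BDF (second column) and driver (sixth column) of each row. A self-contained sketch of that parsing pattern follows; the two sample rows are assembled from values seen in the trace, and the real status output format may differ in detail.

    #!/usr/bin/env bash
    # Illustrative only: parse "Type BDF Vendor Device NUMA Driver ..." rows
    # and keep the controllers bound to nvme, as the acl read loop does.
    declare -a devs
    declare -A drivers

    while read -r _ dev _ _ _ driver _; do
        # Keep only PCI functions ("domain:bus:dev.fn") bound to nvme.
        [[ $dev == *:*:*.* ]] || continue
        [[ $driver == nvme ]] || continue
        devs+=("$dev")
        drivers["$dev"]=$driver
    done <<'EOF'
    NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
    I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
    EOF

    printf 'found %d NVMe controller(s): %s\n' "${#devs[@]}" "${devs[*]}"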
00:03:58.496 23:00:20 -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:58.496 23:00:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.496 23:00:20 -- setup/acl.sh@20 -- # continue 00:03:58.496 23:00:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.496 23:00:20 -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:58.496 23:00:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.496 23:00:20 -- setup/acl.sh@20 -- # continue 00:03:58.496 23:00:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.496 23:00:20 -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:58.496 23:00:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.496 23:00:20 -- setup/acl.sh@20 -- # continue 00:03:58.496 23:00:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.496 23:00:20 -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:58.496 23:00:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.496 23:00:20 -- setup/acl.sh@20 -- # continue 00:03:58.496 23:00:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.496 23:00:20 -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:58.496 23:00:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.496 23:00:20 -- setup/acl.sh@20 -- # continue 00:03:58.496 23:00:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.496 23:00:21 -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:58.496 23:00:21 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:58.496 23:00:21 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:58.496 23:00:21 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:58.496 23:00:21 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:58.496 23:00:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.496 23:00:21 -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:58.496 23:00:21 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.496 23:00:21 -- setup/acl.sh@20 -- # continue 00:03:58.496 23:00:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.496 23:00:21 -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:58.496 23:00:21 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.496 23:00:21 -- setup/acl.sh@20 -- # continue 00:03:58.496 23:00:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.496 23:00:21 -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:58.496 23:00:21 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.496 23:00:21 -- setup/acl.sh@20 -- # continue 00:03:58.496 23:00:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.496 23:00:21 -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:58.496 23:00:21 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.496 23:00:21 -- setup/acl.sh@20 -- # continue 00:03:58.496 23:00:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.496 23:00:21 -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:58.496 23:00:21 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.496 23:00:21 -- setup/acl.sh@20 -- # continue 00:03:58.496 23:00:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.496 23:00:21 -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:58.496 23:00:21 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.496 23:00:21 -- setup/acl.sh@20 -- # continue 00:03:58.496 23:00:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.496 23:00:21 -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:58.496 23:00:21 -- 
setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.496 23:00:21 -- setup/acl.sh@20 -- # continue 00:03:58.496 23:00:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.496 23:00:21 -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:58.496 23:00:21 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.496 23:00:21 -- setup/acl.sh@20 -- # continue 00:03:58.496 23:00:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.496 23:00:21 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:58.496 23:00:21 -- setup/acl.sh@54 -- # run_test denied denied 00:03:58.496 23:00:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:58.496 23:00:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:58.496 23:00:21 -- common/autotest_common.sh@10 -- # set +x 00:03:58.496 ************************************ 00:03:58.496 START TEST denied 00:03:58.496 ************************************ 00:03:58.496 23:00:21 -- common/autotest_common.sh@1104 -- # denied 00:03:58.496 23:00:21 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:58.496 23:00:21 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:58.496 23:00:21 -- setup/acl.sh@38 -- # setup output config 00:03:58.496 23:00:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.496 23:00:21 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:02.759 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:04:02.759 23:00:24 -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:04:02.759 23:00:24 -- setup/acl.sh@28 -- # local dev driver 00:04:02.759 23:00:24 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:02.759 23:00:24 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:04:02.759 23:00:24 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:04:02.759 23:00:24 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:02.759 23:00:24 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:02.759 23:00:24 -- setup/acl.sh@41 -- # setup reset 00:04:02.759 23:00:24 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:02.759 23:00:24 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:06.961 00:04:06.961 real 0m8.309s 00:04:06.961 user 0m2.729s 00:04:06.961 sys 0m4.923s 00:04:06.961 23:00:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.961 23:00:29 -- common/autotest_common.sh@10 -- # set +x 00:04:06.961 ************************************ 00:04:06.961 END TEST denied 00:04:06.961 ************************************ 00:04:06.961 23:00:29 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:06.961 23:00:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:06.961 23:00:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:06.961 23:00:29 -- common/autotest_common.sh@10 -- # set +x 00:04:06.961 ************************************ 00:04:06.961 START TEST allowed 00:04:06.961 ************************************ 00:04:06.961 23:00:29 -- common/autotest_common.sh@1104 -- # allowed 00:04:06.961 23:00:29 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:04:06.962 23:00:29 -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:04:06.962 23:00:29 -- setup/acl.sh@45 -- # setup output config 00:04:06.962 23:00:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.962 23:00:29 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 
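In the denied test above, 0000:65:00.0 is placed on PCI_BLOCKED before "setup.sh config", and the post-condition is simply that the controller is still bound to its original nvme driver, checked by resolving the driver symlink under sysfs. A rough standalone version of that check, with only the BDF and the readlink/nvme comparison taken from the trace:

    #!/usr/bin/env bash
    # Sketch of the blocked-controller check: the device must still sit on nvme.
    bdf=0000:65:00.0

    driver_path=$(readlink -f "/sys/bus/pci/devices/$bdf/driver")
    driver=${driver_path##*/}

    if [[ $driver == nvme ]]; then
        echo "$bdf is still bound to nvme, as expected for a blocked controller"
    else
        echo "unexpected driver '$driver' for $bdf" >&2
        exit 1
    fi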
00:04:12.253 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:12.253 23:00:34 -- setup/acl.sh@47 -- # verify 00:04:12.253 23:00:34 -- setup/acl.sh@28 -- # local dev driver 00:04:12.253 23:00:34 -- setup/acl.sh@48 -- # setup reset 00:04:12.253 23:00:34 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:12.253 23:00:34 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:15.552 00:04:15.552 real 0m8.632s 00:04:15.552 user 0m2.330s 00:04:15.552 sys 0m4.499s 00:04:15.552 23:00:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.552 23:00:38 -- common/autotest_common.sh@10 -- # set +x 00:04:15.552 ************************************ 00:04:15.552 END TEST allowed 00:04:15.552 ************************************ 00:04:15.552 00:04:15.552 real 0m24.443s 00:04:15.552 user 0m7.824s 00:04:15.552 sys 0m14.366s 00:04:15.552 23:00:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.552 23:00:38 -- common/autotest_common.sh@10 -- # set +x 00:04:15.552 ************************************ 00:04:15.552 END TEST acl 00:04:15.552 ************************************ 00:04:15.552 23:00:38 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:15.552 23:00:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:15.552 23:00:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:15.552 23:00:38 -- common/autotest_common.sh@10 -- # set +x 00:04:15.552 ************************************ 00:04:15.552 START TEST hugepages 00:04:15.552 ************************************ 00:04:15.552 23:00:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:15.815 * Looking for test storage... 
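The allowed test takes the opposite path: with PCI_ALLOWED=0000:65:00.0 the controller is expected to be rebound away from nvme (here to vfio-pci), and the test asserts this by grepping the config output for a "nvme -> ..." line. A self-contained sketch of that assertion; the config_output string is a single line copied from the trace and stands in for the captured "setup.sh config" output.

    #!/usr/bin/env bash
    # Rough equivalent of the allowed test's grep-based assertion.
    bdf=0000:65:00.0
    config_output='0000:65:00.0 (144d a80a): nvme -> vfio-pci'

    if grep -E "$bdf .*: nvme -> .*" <<<"$config_output"; then
        echo "controller $bdf was rebound away from nvme, so the allow-list worked"
    else
        echo "no rebind line found for $bdf" >&2
        exit 1
    fi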
00:04:15.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:15.815 23:00:38 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:15.815 23:00:38 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:15.815 23:00:38 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:15.815 23:00:38 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:15.815 23:00:38 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:15.815 23:00:38 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:15.815 23:00:38 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:15.815 23:00:38 -- setup/common.sh@18 -- # local node= 00:04:15.815 23:00:38 -- setup/common.sh@19 -- # local var val 00:04:15.815 23:00:38 -- setup/common.sh@20 -- # local mem_f mem 00:04:15.815 23:00:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.815 23:00:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.815 23:00:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.815 23:00:38 -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.815 23:00:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.815 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.815 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.815 23:00:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 105614536 kB' 'MemAvailable: 108961980 kB' 'Buffers: 4132 kB' 'Cached: 11816624 kB' 'SwapCached: 0 kB' 'Active: 8887412 kB' 'Inactive: 3525644 kB' 'Active(anon): 8397328 kB' 'Inactive(anon): 0 kB' 'Active(file): 490084 kB' 'Inactive(file): 3525644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 595552 kB' 'Mapped: 209408 kB' 'Shmem: 7805028 kB' 'KReclaimable: 312096 kB' 'Slab: 1153316 kB' 'SReclaimable: 312096 kB' 'SUnreclaim: 841220 kB' 'KernelStack: 27696 kB' 'PageTables: 9788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460884 kB' 'Committed_AS: 9945796 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235932 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3982708 kB' 'DirectMap2M: 41834496 kB' 'DirectMap1G: 90177536 kB' 00:04:15.815 23:00:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.815 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.815 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.815 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.815 23:00:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.815 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.815 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.815 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.815 23:00:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.815 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.815 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.815 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.815 23:00:38 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.815 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.815 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.815 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.815 23:00:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.815 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.815 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.815 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.815 23:00:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.815 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.815 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 
-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 
00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 
00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.816 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.816 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.817 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.817 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.817 23:00:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.817 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.817 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.817 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.817 23:00:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.817 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.817 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.817 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.817 23:00:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.817 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.817 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.817 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.817 23:00:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.817 23:00:38 -- setup/common.sh@32 -- # continue 00:04:15.817 23:00:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.817 23:00:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.817 23:00:38 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:15.817 23:00:38 -- setup/common.sh@33 -- # echo 2048 00:04:15.817 23:00:38 -- setup/common.sh@33 -- # return 0 00:04:15.817 23:00:38 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:15.817 23:00:38 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:15.817 23:00:38 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:15.817 23:00:38 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:15.817 23:00:38 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:15.817 23:00:38 -- 
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:15.817 23:00:38 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:15.817 23:00:38 -- setup/hugepages.sh@207 -- # get_nodes 00:04:15.817 23:00:38 -- setup/hugepages.sh@27 -- # local node 00:04:15.817 23:00:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:15.817 23:00:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:15.817 23:00:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:15.817 23:00:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:15.817 23:00:38 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:15.817 23:00:38 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:15.817 23:00:38 -- setup/hugepages.sh@208 -- # clear_hp 00:04:15.817 23:00:38 -- setup/hugepages.sh@37 -- # local node hp 00:04:15.817 23:00:38 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:15.817 23:00:38 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:15.817 23:00:38 -- setup/hugepages.sh@41 -- # echo 0 00:04:15.817 23:00:38 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:15.817 23:00:38 -- setup/hugepages.sh@41 -- # echo 0 00:04:15.817 23:00:38 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:15.817 23:00:38 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:15.817 23:00:38 -- setup/hugepages.sh@41 -- # echo 0 00:04:15.817 23:00:38 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:15.817 23:00:38 -- setup/hugepages.sh@41 -- # echo 0 00:04:15.817 23:00:38 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:15.817 23:00:38 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:15.817 23:00:38 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:15.817 23:00:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:15.817 23:00:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:15.817 23:00:38 -- common/autotest_common.sh@10 -- # set +x 00:04:15.817 ************************************ 00:04:15.817 START TEST default_setup 00:04:15.817 ************************************ 00:04:15.817 23:00:38 -- common/autotest_common.sh@1104 -- # default_setup 00:04:15.817 23:00:38 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:15.817 23:00:38 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:15.817 23:00:38 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:15.817 23:00:38 -- setup/hugepages.sh@51 -- # shift 00:04:15.817 23:00:38 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:15.817 23:00:38 -- setup/hugepages.sh@52 -- # local node_ids 00:04:15.817 23:00:38 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:15.817 23:00:38 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:15.817 23:00:38 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:15.817 23:00:38 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:15.817 23:00:38 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:15.817 23:00:38 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:15.817 23:00:38 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:15.817 23:00:38 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:15.817 23:00:38 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:15.817 23:00:38 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
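The long xtrace above boils down to two steps: get_meminfo reading /proc/meminfo field by field (IFS=': ' plus read -r var val _) until it hits the "Hugepagesize: 2048 kB" row, and clear_hp writing 0 into every node's hugepage pools before default_setup requests 1024 pages on node 0. A condensed sketch of those two steps; the function names are illustrative, the sysfs writes need root, and the paths and values match what the trace shows.

    #!/usr/bin/env bash
    # Read the default hugepage size, then zero the per-node hugepage pools.

    get_hugepagesize_kb() {
        local var val _
        while IFS=': ' read -r var val _; do
            # Only the "Hugepagesize: 2048 kB" row matters here; skip the rest.
            [[ $var == Hugepagesize ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    clear_node_hugepages() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"
            done
        done
    }

    echo "default hugepage size: $(get_hugepagesize_kb) kB"
    clear_node_hugepages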
00:04:15.817 23:00:38 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:15.817 23:00:38 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:15.817 23:00:38 -- setup/hugepages.sh@73 -- # return 0 00:04:15.817 23:00:38 -- setup/hugepages.sh@137 -- # setup output 00:04:15.817 23:00:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.817 23:00:38 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:19.122 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:19.122 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:19.122 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:19.122 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:19.122 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:19.122 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:19.122 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:19.122 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:19.122 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:19.122 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:19.122 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:19.122 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:19.122 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:19.122 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:19.122 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:19.122 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:19.122 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:19.387 23:00:41 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:19.387 23:00:41 -- setup/hugepages.sh@89 -- # local node 00:04:19.387 23:00:41 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:19.387 23:00:41 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:19.387 23:00:41 -- setup/hugepages.sh@92 -- # local surp 00:04:19.387 23:00:41 -- setup/hugepages.sh@93 -- # local resv 00:04:19.387 23:00:41 -- setup/hugepages.sh@94 -- # local anon 00:04:19.387 23:00:41 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:19.387 23:00:41 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:19.387 23:00:41 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:19.387 23:00:41 -- setup/common.sh@18 -- # local node= 00:04:19.387 23:00:41 -- setup/common.sh@19 -- # local var val 00:04:19.387 23:00:41 -- setup/common.sh@20 -- # local mem_f mem 00:04:19.387 23:00:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.387 23:00:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.387 23:00:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.387 23:00:41 -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.387 23:00:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.387 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.387 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.387 23:00:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 107809840 kB' 'MemAvailable: 111156896 kB' 'Buffers: 4132 kB' 'Cached: 11816748 kB' 'SwapCached: 0 kB' 'Active: 8904772 kB' 'Inactive: 3525644 kB' 'Active(anon): 8414688 kB' 'Inactive(anon): 0 kB' 'Active(file): 490084 kB' 'Inactive(file): 3525644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613008 kB' 'Mapped: 209692 kB' 'Shmem: 7805152 kB' 'KReclaimable: 311320 kB' 'Slab: 1150960 kB' 'SReclaimable: 311320 kB' 'SUnreclaim: 839640 kB' 'KernelStack: 27680 
kB' 'PageTables: 9744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9965860 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235964 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3982708 kB' 'DirectMap2M: 41834496 kB' 'DirectMap1G: 90177536 kB' 00:04:19.387 23:00:41 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.387 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.387 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.387 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.387 23:00:41 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.387 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.387 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.387 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.387 23:00:41 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.387 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.387 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.387 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.387 23:00:41 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.387 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.387 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.387 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.387 23:00:41 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.387 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.387 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.387 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.387 23:00:41 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.387 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.387 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.387 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.387 23:00:41 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.387 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.387 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.387 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.387 23:00:41 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.387 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.387 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.387 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.388 
23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # [[ 
KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.388 23:00:41 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.388 23:00:41 -- setup/common.sh@33 -- # echo 0 00:04:19.388 23:00:41 -- setup/common.sh@33 -- # return 0 00:04:19.388 23:00:41 -- setup/hugepages.sh@97 -- # anon=0 00:04:19.388 23:00:41 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:19.388 23:00:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.388 23:00:41 -- setup/common.sh@18 -- # local node= 00:04:19.388 23:00:41 -- setup/common.sh@19 -- # local var val 00:04:19.388 23:00:41 -- setup/common.sh@20 -- # local mem_f mem 00:04:19.388 23:00:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.388 23:00:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.388 23:00:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.388 23:00:41 -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.388 23:00:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.388 23:00:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 107816504 kB' 'MemAvailable: 111163560 kB' 'Buffers: 4132 kB' 'Cached: 11816752 kB' 'SwapCached: 0 kB' 'Active: 8906920 kB' 'Inactive: 3525644 kB' 'Active(anon): 8416836 kB' 'Inactive(anon): 0 kB' 'Active(file): 490084 kB' 'Inactive(file): 3525644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 614932 kB' 'Mapped: 210196 kB' 'Shmem: 7805156 kB' 'KReclaimable: 311320 kB' 'Slab: 1150948 kB' 'SReclaimable: 311320 kB' 'SUnreclaim: 839628 kB' 'KernelStack: 27568 kB' 'PageTables: 9468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9969652 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235884 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
3982708 kB' 'DirectMap2M: 41834496 kB' 'DirectMap1G: 90177536 kB' 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.388 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.388 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 
-- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 
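The field-by-field pattern tests traced above are setup/common.sh's get_meminfo helper scanning /proc/meminfo (or a per-node meminfo file) for one requested field. A minimal bash sketch of that logic, reconstructed from the xtrace output rather than taken from the actual script source (the fall-through return at the end is an assumption):

shopt -s extglob

get_meminfo() {
    # get_meminfo <field> [node] -- sketch of the logic visible in the trace above
    local get=$1 node=${2:-} var val rest
    local mem_f=/proc/meminfo
    local -a mem
    # With a node argument, read that node's meminfo instead of the global file.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; strip it (needs extglob).
    mem=("${mem[@]#Node +([0-9]) }")
    # Walk the "Field: value [kB]" lines and print the value of the requested field.
    while IFS=': ' read -r var val rest; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1   # assumed fall-through; the trace only shows the matching path
}

In this run get_meminfo AnonHugePages and get_meminfo HugePages_Surp both print 0, which is where the anon=0 and surp=0 assignments in the trace come from.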
00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.389 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.389 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.390 23:00:41 -- setup/common.sh@33 -- # echo 0 00:04:19.390 23:00:41 -- setup/common.sh@33 -- # return 0 00:04:19.390 23:00:41 -- setup/hugepages.sh@99 -- # surp=0 00:04:19.390 23:00:41 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:19.390 23:00:41 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:19.390 23:00:41 -- setup/common.sh@18 -- # local node= 00:04:19.390 23:00:41 -- setup/common.sh@19 -- # local var val 00:04:19.390 23:00:41 -- setup/common.sh@20 -- # local mem_f mem 00:04:19.390 23:00:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.390 23:00:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.390 23:00:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.390 23:00:41 -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.390 23:00:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.390 23:00:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 107809208 kB' 'MemAvailable: 111156256 kB' 'Buffers: 4132 kB' 'Cached: 11816764 kB' 'SwapCached: 0 kB' 'Active: 8910420 kB' 'Inactive: 3525644 kB' 'Active(anon): 8420336 kB' 'Inactive(anon): 0 kB' 'Active(file): 490084 kB' 'Inactive(file): 3525644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 618728 kB' 'Mapped: 210608 kB' 'Shmem: 7805168 kB' 'KReclaimable: 311304 kB' 'Slab: 1150924 kB' 'SReclaimable: 311304 kB' 'SUnreclaim: 839620 kB' 'KernelStack: 27600 kB' 'PageTables: 9532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9973644 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235840 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3982708 kB' 'DirectMap2M: 41834496 kB' 'DirectMap1G: 90177536 kB' 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.390 23:00:41 -- setup/common.sh@32 
-- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.390 
23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.390 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.390 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # 
continue 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.391 23:00:41 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.391 23:00:41 -- setup/common.sh@33 -- # echo 0 00:04:19.391 23:00:41 -- setup/common.sh@33 -- # return 0 00:04:19.391 23:00:41 -- setup/hugepages.sh@100 -- # resv=0 00:04:19.391 23:00:41 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:19.391 nr_hugepages=1024 00:04:19.391 23:00:41 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:19.391 resv_hugepages=0 00:04:19.391 23:00:41 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:19.391 surplus_hugepages=0 00:04:19.391 23:00:41 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:19.391 anon_hugepages=0 00:04:19.391 23:00:41 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:19.391 23:00:41 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:19.391 23:00:41 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:19.391 23:00:41 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:04:19.391 23:00:41 -- setup/common.sh@18 -- # local node= 00:04:19.391 23:00:41 -- setup/common.sh@19 -- # local var val 00:04:19.391 23:00:41 -- setup/common.sh@20 -- # local mem_f mem 00:04:19.391 23:00:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.391 23:00:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.391 23:00:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.391 23:00:41 -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.391 23:00:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.391 23:00:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 107806800 kB' 'MemAvailable: 111153848 kB' 'Buffers: 4132 kB' 'Cached: 11816780 kB' 'SwapCached: 0 kB' 'Active: 8905216 kB' 'Inactive: 3525644 kB' 'Active(anon): 8415132 kB' 'Inactive(anon): 0 kB' 'Active(file): 490084 kB' 'Inactive(file): 3525644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613456 kB' 'Mapped: 210104 kB' 'Shmem: 7805184 kB' 'KReclaimable: 311304 kB' 'Slab: 1150916 kB' 'SReclaimable: 311304 kB' 'SUnreclaim: 839612 kB' 'KernelStack: 27616 kB' 'PageTables: 9804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9965904 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235916 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3982708 kB' 'DirectMap2M: 41834496 kB' 'DirectMap1G: 90177536 kB' 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.391 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.391 23:00:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
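The meminfo snapshot above reports HugePages_Total: 1024, Hugepagesize: 2048 kB and Hugetlb: 2097152 kB, i.e. 1024 x 2048 kB = 2 GiB of reserved huge pages. A quick standalone consistency check along the same lines (a hypothetical helper, not part of the SPDK test scripts; it assumes only 2048 kB pages are configured, as in this run):

# Hugetlb should equal HugePages_Total * Hugepagesize when a single page size is in use.
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
size_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
hugetlb=$(awk '/^Hugetlb:/ {print $2}' /proc/meminfo)
echo "computed: $((total * size_kb)) kB  reported: ${hugetlb} kB"   # 1024 * 2048 = 2097152 here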
00:04:19.392 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # 
continue 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.392 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.392 23:00:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 
00:04:19.393 23:00:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.393 23:00:41 -- setup/common.sh@33 -- # echo 1024 00:04:19.393 23:00:41 -- setup/common.sh@33 -- # return 0 00:04:19.393 23:00:41 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:19.393 23:00:41 -- setup/hugepages.sh@112 -- # get_nodes 00:04:19.393 23:00:41 -- setup/hugepages.sh@27 -- # local node 00:04:19.393 23:00:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:19.393 23:00:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:19.393 23:00:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:19.393 23:00:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:19.393 23:00:41 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:19.393 23:00:41 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:19.393 23:00:41 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:19.393 23:00:41 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:19.393 23:00:41 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:19.393 23:00:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.393 23:00:41 -- setup/common.sh@18 -- # local node=0 00:04:19.393 23:00:41 -- setup/common.sh@19 -- # local var val 00:04:19.393 23:00:41 -- setup/common.sh@20 -- # local mem_f mem 00:04:19.393 23:00:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.393 23:00:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:19.393 23:00:41 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:19.393 23:00:41 -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.393 23:00:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.393 23:00:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52475716 kB' 'MemUsed: 13183292 kB' 'SwapCached: 0 
kB' 'Active: 5532128 kB' 'Inactive: 3325404 kB' 'Active(anon): 5196672 kB' 'Inactive(anon): 0 kB' 'Active(file): 335456 kB' 'Inactive(file): 3325404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8654740 kB' 'Mapped: 132360 kB' 'AnonPages: 206080 kB' 'Shmem: 4993880 kB' 'KernelStack: 13496 kB' 'PageTables: 5680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 177860 kB' 'Slab: 631576 kB' 'SReclaimable: 177860 kB' 'SUnreclaim: 453716 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.393 
23:00:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.393 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.393 23:00:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.394 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.394 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.394 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.394 23:00:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.394 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.394 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.394 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.394 23:00:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.394 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.394 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.394 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.394 23:00:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.394 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.394 23:00:41 -- setup/common.sh@31 -- # IFS=': 
' 00:04:19.394 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.394 23:00:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.394 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.394 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.394 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.394 23:00:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.394 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.394 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.394 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.394 23:00:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.394 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.394 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.394 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.394 23:00:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.394 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.394 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.394 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.394 23:00:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.394 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.394 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.394 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.394 23:00:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.394 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.394 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.394 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.394 23:00:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.394 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.394 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.394 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.394 23:00:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.394 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.394 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.394 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.394 23:00:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.394 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.394 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.394 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.394 23:00:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.394 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.394 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.394 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.394 23:00:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.394 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.394 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.394 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.394 23:00:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.394 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.394 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.394 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.394 23:00:41 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.394 23:00:41 -- setup/common.sh@32 -- # continue 00:04:19.394 23:00:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.394 23:00:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.394 23:00:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.394 23:00:41 -- setup/common.sh@33 -- # echo 0 00:04:19.394 23:00:41 -- setup/common.sh@33 -- # return 0 00:04:19.394 23:00:41 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:19.394 23:00:41 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:19.394 23:00:41 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:19.394 23:00:41 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:19.394 23:00:41 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:19.394 node0=1024 expecting 1024 00:04:19.394 23:00:41 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:19.394 00:04:19.394 real 0m3.620s 00:04:19.394 user 0m1.363s 00:04:19.394 sys 0m2.193s 00:04:19.394 23:00:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.394 23:00:41 -- common/autotest_common.sh@10 -- # set +x 00:04:19.394 ************************************ 00:04:19.394 END TEST default_setup 00:04:19.394 ************************************ 00:04:19.394 23:00:42 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:19.394 23:00:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:19.394 23:00:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:19.394 23:00:42 -- common/autotest_common.sh@10 -- # set +x 00:04:19.394 ************************************ 00:04:19.394 START TEST per_node_1G_alloc 00:04:19.394 ************************************ 00:04:19.394 23:00:42 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:04:19.394 23:00:42 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:19.394 23:00:42 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:19.394 23:00:42 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:19.394 23:00:42 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:19.394 23:00:42 -- setup/hugepages.sh@51 -- # shift 00:04:19.394 23:00:42 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:19.394 23:00:42 -- setup/hugepages.sh@52 -- # local node_ids 00:04:19.394 23:00:42 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:19.394 23:00:42 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:19.394 23:00:42 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:19.394 23:00:42 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:19.394 23:00:42 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:19.394 23:00:42 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:19.394 23:00:42 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:19.394 23:00:42 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:19.394 23:00:42 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:19.394 23:00:42 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:19.394 23:00:42 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:19.394 23:00:42 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:19.394 23:00:42 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:19.394 23:00:42 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:19.394 23:00:42 -- setup/hugepages.sh@73 -- # return 0 00:04:19.394 23:00:42 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:19.394 
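The xtrace above shows per_node_1G_alloc requesting 1048576 kB of hugepages for nodes 0 and 1: setup/hugepages.sh converts the size into 512 default-size (2048 kB) pages and assigns that full count to every listed node before exporting NRHUGE=512 and HUGENODE=0,1 to setup.sh, so the kernel should end up with 1024 pages in total. A minimal sketch of that arithmetic, using a hypothetical helper name rather than the actual SPDK setup scripts:

#!/usr/bin/env bash
# Sketch only: per_node_hugepages is a hypothetical stand-in that mirrors the
# arithmetic traced above, not the real setup/hugepages.sh functions.
per_node_hugepages() {
    local size_kb=$1; shift
    local default_kb
    default_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 on this rig
    local pages=$((size_kb / default_kb))                          # 1048576 / 2048 = 512
    for node in "$@"; do                                           # every requested node
        echo "node${node}=${pages}"                                # gets the full count
    done
}

per_node_hugepages 1048576 0 1   # prints node0=512 and node1=512 -> 1024 pages total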
23:00:42 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:19.394 23:00:42 -- setup/hugepages.sh@146 -- # setup output 00:04:19.394 23:00:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.394 23:00:42 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:23.605 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:23.605 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:23.605 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:23.605 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:23.605 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:23.605 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:23.605 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:23.605 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:23.605 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:23.605 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:23.605 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:23.605 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:23.605 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:23.605 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:23.605 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:23.605 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:23.605 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:23.605 23:00:45 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:23.605 23:00:45 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:23.605 23:00:45 -- setup/hugepages.sh@89 -- # local node 00:04:23.605 23:00:45 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:23.605 23:00:45 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:23.605 23:00:45 -- setup/hugepages.sh@92 -- # local surp 00:04:23.605 23:00:45 -- setup/hugepages.sh@93 -- # local resv 00:04:23.605 23:00:45 -- setup/hugepages.sh@94 -- # local anon 00:04:23.605 23:00:45 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:23.605 23:00:45 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:23.605 23:00:45 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:23.605 23:00:45 -- setup/common.sh@18 -- # local node= 00:04:23.605 23:00:45 -- setup/common.sh@19 -- # local var val 00:04:23.605 23:00:45 -- setup/common.sh@20 -- # local mem_f mem 00:04:23.605 23:00:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.605 23:00:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.605 23:00:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.605 23:00:45 -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.605 23:00:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.605 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.605 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.605 23:00:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 107805624 kB' 'MemAvailable: 111152672 kB' 'Buffers: 4132 kB' 'Cached: 11816880 kB' 'SwapCached: 0 kB' 'Active: 8905220 kB' 'Inactive: 3525644 kB' 'Active(anon): 8415136 kB' 'Inactive(anon): 0 kB' 'Active(file): 490084 kB' 'Inactive(file): 3525644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613244 kB' 'Mapped: 209704 
kB' 'Shmem: 7805284 kB' 'KReclaimable: 311304 kB' 'Slab: 1150868 kB' 'SReclaimable: 311304 kB' 'SUnreclaim: 839564 kB' 'KernelStack: 27536 kB' 'PageTables: 9160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9968524 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236012 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3982708 kB' 'DirectMap2M: 41834496 kB' 'DirectMap1G: 90177536 kB' 00:04:23.605 23:00:45 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.605 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.605 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.605 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.605 23:00:45 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.605 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.605 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.605 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.605 23:00:45 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.605 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.605 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.605 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.605 23:00:45 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.605 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.605 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.605 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.605 23:00:45 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.605 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.605 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.605 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.605 23:00:45 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.605 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.605 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.605 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.605 23:00:45 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.605 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.605 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.605 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.605 23:00:45 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.605 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.605 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.605 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.605 23:00:45 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.605 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.605 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.605 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.605 23:00:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.605 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.605 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.605 23:00:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.606 23:00:45 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:23.606 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.606 23:00:45 -- setup/common.sh@33 -- # echo 0 00:04:23.606 23:00:45 -- setup/common.sh@33 -- # return 0 00:04:23.606 23:00:45 -- setup/hugepages.sh@97 -- # anon=0 00:04:23.606 23:00:45 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:23.606 23:00:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.606 23:00:45 -- setup/common.sh@18 -- # local node= 00:04:23.606 23:00:45 -- setup/common.sh@19 -- # local var val 00:04:23.606 23:00:45 -- setup/common.sh@20 -- # local mem_f mem 00:04:23.606 23:00:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.606 23:00:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.606 23:00:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.606 23:00:45 -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.606 23:00:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.606 23:00:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 107806296 kB' 'MemAvailable: 111153344 kB' 'Buffers: 4132 kB' 'Cached: 11816888 kB' 'SwapCached: 0 kB' 'Active: 8905892 kB' 'Inactive: 3525644 kB' 'Active(anon): 8415808 kB' 'Inactive(anon): 0 kB' 'Active(file): 490084 kB' 'Inactive(file): 3525644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613884 kB' 'Mapped: 209704 kB' 'Shmem: 7805292 kB' 'KReclaimable: 311304 kB' 'Slab: 1150868 kB' 'SReclaimable: 311304 kB' 'SUnreclaim: 839564 kB' 'KernelStack: 27648 kB' 'PageTables: 9344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9968540 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235964 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3982708 kB' 'DirectMap2M: 41834496 kB' 'DirectMap1G: 90177536 kB' 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.606 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.606 23:00:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 
23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ 
VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.607 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.607 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.608 23:00:45 -- setup/common.sh@33 -- # echo 0 00:04:23.608 23:00:45 -- setup/common.sh@33 -- # return 0 00:04:23.608 23:00:45 -- setup/hugepages.sh@99 -- # surp=0 00:04:23.608 23:00:45 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:23.608 23:00:45 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:23.608 23:00:45 -- setup/common.sh@18 -- # local node= 00:04:23.608 23:00:45 -- setup/common.sh@19 -- # local var val 00:04:23.608 23:00:45 -- setup/common.sh@20 -- # local mem_f mem 00:04:23.608 23:00:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.608 23:00:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.608 23:00:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.608 23:00:45 -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.608 23:00:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.608 23:00:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 107806664 kB' 'MemAvailable: 111153712 kB' 'Buffers: 4132 kB' 'Cached: 11816900 kB' 'SwapCached: 0 kB' 'Active: 8905712 kB' 'Inactive: 3525644 kB' 'Active(anon): 8415628 kB' 'Inactive(anon): 0 kB' 'Active(file): 490084 kB' 'Inactive(file): 3525644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613872 kB' 'Mapped: 209704 kB' 'Shmem: 7805304 kB' 'KReclaimable: 311304 kB' 'Slab: 1150992 kB' 'SReclaimable: 311304 kB' 'SUnreclaim: 839688 kB' 'KernelStack: 27712 kB' 'PageTables: 9552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9969312 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235932 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3982708 kB' 'DirectMap2M: 41834496 kB' 'DirectMap1G: 90177536 kB' 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.608 23:00:45 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.608 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.608 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # continue 
00:04:23.609 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.609 23:00:45 -- setup/common.sh@33 -- # echo 0 00:04:23.609 23:00:45 -- setup/common.sh@33 -- # return 0 00:04:23.609 23:00:45 -- setup/hugepages.sh@100 -- # resv=0 00:04:23.609 23:00:45 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:23.609 nr_hugepages=1024 00:04:23.609 23:00:45 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:23.609 resv_hugepages=0 00:04:23.609 23:00:45 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:23.609 surplus_hugepages=0 00:04:23.609 23:00:45 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:23.609 anon_hugepages=0 00:04:23.609 23:00:45 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:23.609 23:00:45 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 
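Each get_meminfo call traced above walks /proc/meminfo with IFS=': ', skipping every key until it reaches the requested field (AnonHugePages, HugePages_Surp, HugePages_Rsvd) and echoing its value; after confirming anon, surplus, and reserved counts of 0, verify_nr_hugepages checks that the 1024-page request is consistent and then re-reads HugePages_Total, as the trace continues below. A condensed sketch of that lookup and accounting, with meminfo_value as a hypothetical stand-in for the setup/common.sh helper:

#!/usr/bin/env bash
# Sketch only: approximates the lookup and consistency check traced above.
meminfo_value() {
    local key=$1 var val _
    while IFS=': ' read -r var val _; do        # same field splitting as the trace
        [[ $var == "$key" ]] && { echo "${val:-0}"; return 0; }
    done < /proc/meminfo
    echo 0
}

nr_hugepages=1024
surp=$(meminfo_value HugePages_Surp)
resv=$(meminfo_value HugePages_Rsvd)
total=$(meminfo_value HugePages_Total)

# This run expects surp=resv=0, so the kernel total must equal the request.
if (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )); then
    echo "nr_hugepages=${nr_hugepages} verified"
fi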
00:04:23.609 23:00:45 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:23.609 23:00:45 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:23.609 23:00:45 -- setup/common.sh@18 -- # local node= 00:04:23.609 23:00:45 -- setup/common.sh@19 -- # local var val 00:04:23.609 23:00:45 -- setup/common.sh@20 -- # local mem_f mem 00:04:23.609 23:00:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.609 23:00:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.609 23:00:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.609 23:00:45 -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.609 23:00:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.609 23:00:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 107808016 kB' 'MemAvailable: 111155064 kB' 'Buffers: 4132 kB' 'Cached: 11816900 kB' 'SwapCached: 0 kB' 'Active: 8905428 kB' 'Inactive: 3525644 kB' 'Active(anon): 8415344 kB' 'Inactive(anon): 0 kB' 'Active(file): 490084 kB' 'Inactive(file): 3525644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613564 kB' 'Mapped: 209704 kB' 'Shmem: 7805304 kB' 'KReclaimable: 311304 kB' 'Slab: 1150992 kB' 'SReclaimable: 311304 kB' 'SUnreclaim: 839688 kB' 'KernelStack: 27568 kB' 'PageTables: 9244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9968568 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235900 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3982708 kB' 'DirectMap2M: 41834496 kB' 'DirectMap1G: 90177536 kB' 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.609 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.609 23:00:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.610 23:00:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.610 23:00:45 
-- setup/common.sh@32 -- # continue 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 
00:04:23.610 23:00:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.610 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.610 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.611 23:00:45 -- 
setup/common.sh@32 -- # continue 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.611 23:00:45 -- setup/common.sh@33 -- # echo 1024 00:04:23.611 23:00:45 -- setup/common.sh@33 -- # return 0 00:04:23.611 23:00:45 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:23.611 23:00:45 -- setup/hugepages.sh@112 -- # get_nodes 00:04:23.611 23:00:45 -- setup/hugepages.sh@27 -- # local node 00:04:23.611 23:00:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:23.611 23:00:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:23.611 23:00:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:23.611 23:00:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:23.611 23:00:45 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:23.611 23:00:45 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:23.611 23:00:45 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:23.611 23:00:45 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:23.611 23:00:45 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:23.611 23:00:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.611 23:00:45 -- setup/common.sh@18 -- # local node=0 00:04:23.611 23:00:45 -- setup/common.sh@19 -- # local var val 00:04:23.611 23:00:45 -- setup/common.sh@20 -- # local mem_f mem 00:04:23.611 23:00:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.611 23:00:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:23.611 23:00:45 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:23.611 23:00:45 -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.611 23:00:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # read -r 
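What the long field-by-field trace above boils down to is setup/common.sh's get_meminfo helper: it scans /proc/meminfo (or a node's own meminfo file when a node index is given), skips every field until it reaches the requested key, and prints that key's value. A minimal sketch of that lookup, assuming the option handling and the "Node N " prefix stripping work roughly the way the trace suggests (the real script uses mapfile plus an extglob strip; this sketch uses sed):

    # Hypothetical reconstruction of the lookup loop traced above.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node queries read the node's own meminfo when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        while IFS=': ' read -r var val _; do
            # Skip every field until the requested one, then print its value.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

    # Example matching this run: get_meminfo HugePages_Total prints 1024,
    # and get_meminfo HugePages_Surp 0 prints 0 for node 0.

Every "continue" entry in the trace is simply one non-matching field being skipped by that loop.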
var val _ 00:04:23.611 23:00:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53518764 kB' 'MemUsed: 12140244 kB' 'SwapCached: 0 kB' 'Active: 5533360 kB' 'Inactive: 3325404 kB' 'Active(anon): 5197904 kB' 'Inactive(anon): 0 kB' 'Active(file): 335456 kB' 'Inactive(file): 3325404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8654800 kB' 'Mapped: 132200 kB' 'AnonPages: 207300 kB' 'Shmem: 4993940 kB' 'KernelStack: 13592 kB' 'PageTables: 5772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 177860 kB' 'Slab: 631624 kB' 'SReclaimable: 177860 kB' 'SUnreclaim: 453764 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.611 23:00:45 -- setup/common.sh@32 
-- # continue 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.611 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.611 23:00:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 
00:04:23.612 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.612 23:00:45 -- setup/common.sh@33 -- # echo 0 00:04:23.612 23:00:45 -- setup/common.sh@33 -- # return 0 00:04:23.612 23:00:45 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:23.612 23:00:45 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:23.612 23:00:45 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:23.612 23:00:45 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:23.612 23:00:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.612 23:00:45 -- setup/common.sh@18 -- # local node=1 00:04:23.612 23:00:45 -- setup/common.sh@19 -- # local var val 00:04:23.612 23:00:45 -- setup/common.sh@20 -- # local mem_f mem 00:04:23.612 23:00:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.612 23:00:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:23.612 23:00:45 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:23.612 23:00:45 -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.612 23:00:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.612 23:00:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679856 kB' 'MemFree: 54290168 kB' 'MemUsed: 6389688 kB' 'SwapCached: 0 kB' 'Active: 3369400 kB' 'Inactive: 200240 kB' 'Active(anon): 3214772 kB' 'Inactive(anon): 0 kB' 'Active(file): 154628 kB' 'Inactive(file): 200240 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3166276 kB' 'Mapped: 76632 kB' 'AnonPages: 403448 kB' 'Shmem: 2811408 kB' 'KernelStack: 13992 kB' 'PageTables: 3328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133444 kB' 'Slab: 519328 kB' 'SReclaimable: 133444 kB' 'SUnreclaim: 385884 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.612 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.612 23:00:45 -- setup/common.sh@32 -- # continue 
00:04:23.612 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.613 23:00:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # continue 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.613 23:00:45 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.613 23:00:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.613 23:00:45 -- setup/common.sh@33 -- # echo 0 00:04:23.613 23:00:45 -- setup/common.sh@33 -- # return 0 00:04:23.613 23:00:45 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:23.613 23:00:45 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:23.613 23:00:45 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:23.613 23:00:45 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:23.613 23:00:45 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:23.613 node0=512 expecting 512 00:04:23.613 23:00:45 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:23.613 23:00:45 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:23.614 23:00:45 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:23.614 23:00:45 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:23.614 node1=512 expecting 512 00:04:23.614 23:00:45 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:23.614 00:04:23.614 real 0m3.690s 00:04:23.614 user 0m1.414s 00:04:23.614 sys 0m2.333s 00:04:23.614 23:00:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.614 23:00:45 -- common/autotest_common.sh@10 -- # set +x 00:04:23.614 ************************************ 00:04:23.614 END TEST per_node_1G_alloc 00:04:23.614 ************************************ 00:04:23.614 23:00:45 -- 
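The "node0=512 expecting 512" / "node1=512 expecting 512" lines close out per_node_1G_alloc: with 1024 pages in the pool on a 2-node machine and no surplus pages, each node is expected to hold 512. A small sketch of that bookkeeping, assuming 2048 kB default hugepages and a get_meminfo helper like the one sketched earlier (variable names only mirror the trace; the real hugepages.sh does more):

    nr_hugepages=1024
    no_nodes=2
    nodes_test=() nodes_sys=()
    for node in 0 1; do
        # What the kernel actually allocated on this node.
        nodes_sys[node]=$(< "/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages")
        surp=$(get_meminfo HugePages_Surp "$node")               # 0 on both nodes in this run
        nodes_test[node]=$(( nr_hugepages / no_nodes + surp ))   # 512 expected per node
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done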
setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:23.614 23:00:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:23.614 23:00:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:23.614 23:00:45 -- common/autotest_common.sh@10 -- # set +x 00:04:23.614 ************************************ 00:04:23.614 START TEST even_2G_alloc 00:04:23.614 ************************************ 00:04:23.614 23:00:45 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:04:23.614 23:00:45 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:23.614 23:00:45 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:23.614 23:00:45 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:23.614 23:00:45 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:23.614 23:00:45 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:23.614 23:00:45 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:23.614 23:00:45 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:23.614 23:00:45 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:23.614 23:00:45 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:23.614 23:00:45 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:23.614 23:00:45 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:23.614 23:00:45 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:23.614 23:00:45 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:23.614 23:00:45 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:23.614 23:00:45 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:23.614 23:00:45 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:23.614 23:00:45 -- setup/hugepages.sh@83 -- # : 512 00:04:23.614 23:00:45 -- setup/hugepages.sh@84 -- # : 1 00:04:23.614 23:00:45 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:23.614 23:00:45 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:23.614 23:00:45 -- setup/hugepages.sh@83 -- # : 0 00:04:23.614 23:00:45 -- setup/hugepages.sh@84 -- # : 0 00:04:23.614 23:00:45 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:23.614 23:00:45 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:23.614 23:00:45 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:23.614 23:00:45 -- setup/hugepages.sh@153 -- # setup output 00:04:23.614 23:00:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.614 23:00:45 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:26.160 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:26.424 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:26.424 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:26.424 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:26.424 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:26.424 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:26.424 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:26.424 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:26.424 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:26.424 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:26.424 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:26.424 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:26.424 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:26.424 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:26.424 0000:00:01.3 (8086 
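Before the trace continues into setup.sh's device listing: even_2G_alloc asks for 2097152 kB (2 GiB) of hugepages, and with the default 2048 kB page size that request becomes NRHUGE=1024 pages, split evenly across the two nodes because HUGE_EVEN_ALLOC=yes. The arithmetic, as a sketch (the numbers are taken from this run; nothing else is from the script):

    size_kb=2097152                    # requested size: 2 GiB in kB
    default_hugepages_kb=2048          # Hugepagesize reported in /proc/meminfo
    no_nodes=2
    nr_hugepages=$(( size_kb / default_hugepages_kb ))   # 1024
    per_node=$(( nr_hugepages / no_nodes ))              # 512
    echo "NRHUGE=$nr_hugepages, $per_node pages per node (HUGE_EVEN_ALLOC=yes)"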
0b00): Already using the vfio-pci driver 00:04:26.424 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:26.424 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:26.424 23:00:48 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:26.424 23:00:48 -- setup/hugepages.sh@89 -- # local node 00:04:26.424 23:00:48 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:26.424 23:00:48 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:26.424 23:00:48 -- setup/hugepages.sh@92 -- # local surp 00:04:26.424 23:00:48 -- setup/hugepages.sh@93 -- # local resv 00:04:26.424 23:00:48 -- setup/hugepages.sh@94 -- # local anon 00:04:26.424 23:00:48 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:26.424 23:00:48 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:26.424 23:00:48 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:26.424 23:00:48 -- setup/common.sh@18 -- # local node= 00:04:26.424 23:00:48 -- setup/common.sh@19 -- # local var val 00:04:26.424 23:00:48 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.424 23:00:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.424 23:00:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.424 23:00:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.424 23:00:48 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.424 23:00:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.424 23:00:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.424 23:00:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.424 23:00:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 107863728 kB' 'MemAvailable: 111210776 kB' 'Buffers: 4132 kB' 'Cached: 11817028 kB' 'SwapCached: 0 kB' 'Active: 8899932 kB' 'Inactive: 3525644 kB' 'Active(anon): 8409848 kB' 'Inactive(anon): 0 kB' 'Active(file): 490084 kB' 'Inactive(file): 3525644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607544 kB' 'Mapped: 208572 kB' 'Shmem: 7805432 kB' 'KReclaimable: 311304 kB' 'Slab: 1149360 kB' 'SReclaimable: 311304 kB' 'SUnreclaim: 838056 kB' 'KernelStack: 27456 kB' 'PageTables: 8856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9954592 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235900 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3982708 kB' 'DirectMap2M: 41834496 kB' 'DirectMap1G: 90177536 kB' 00:04:26.424 23:00:48 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.424 23:00:48 -- setup/common.sh@32 -- # continue 00:04:26.424 23:00:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.424 23:00:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.424 23:00:48 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.424 23:00:48 -- setup/common.sh@32 -- # continue 00:04:26.424 23:00:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.424 23:00:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.424 23:00:48 -- setup/common.sh@32 -- # [[ MemAvailable == 
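The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] check at hugepages.sh@96 appears to gate anonymous-hugepage accounting on transparent hugepages not being disabled: only when the kernel's THP mode is something other than [never] does the script read AnonHugePages at all (0 kB here, so anon ends up 0). A hedged sketch, assuming the usual sysfs path:

    anon=0
    thp=/sys/kernel/mm/transparent_hugepage/enabled    # assumed path
    if [[ $(< "$thp") != *"[never]"* ]]; then
        # THP is enabled (this run reports "always [madvise] never"),
        # so count THP-backed anonymous memory; 0 kB in this run.
        anon=$(get_meminfo AnonHugePages)
    fi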
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.424 23:00:48 -- setup/common.sh@32 -- # continue 00:04:26.424 23:00:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.424 23:00:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.424 23:00:48 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.424 23:00:48 -- setup/common.sh@32 -- # continue 00:04:26.424 23:00:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.424 23:00:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.424 23:00:48 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.424 23:00:48 -- setup/common.sh@32 -- # continue 00:04:26.424 23:00:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.424 23:00:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.424 23:00:48 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.424 23:00:48 -- setup/common.sh@32 -- # continue 00:04:26.424 23:00:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.424 23:00:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.424 23:00:48 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.424 23:00:48 -- setup/common.sh@32 -- # continue 00:04:26.424 23:00:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.424 23:00:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.424 23:00:48 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.424 23:00:48 -- setup/common.sh@32 -- # continue 00:04:26.424 23:00:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.424 23:00:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.424 23:00:48 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.424 23:00:48 -- setup/common.sh@32 -- # continue 00:04:26.424 23:00:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.424 23:00:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.424 23:00:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.424 23:00:48 -- setup/common.sh@32 -- # continue 00:04:26.424 23:00:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.424 23:00:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.424 23:00:48 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.424 23:00:48 -- setup/common.sh@32 -- # continue 00:04:26.424 23:00:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.424 23:00:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.424 23:00:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.424 23:00:48 -- setup/common.sh@32 -- # continue 00:04:26.424 23:00:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.424 23:00:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.424 23:00:48 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.424 23:00:48 -- setup/common.sh@32 -- # continue 00:04:26.424 23:00:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.424 23:00:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.424 23:00:48 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.424 23:00:48 -- setup/common.sh@32 -- # continue 00:04:26.424 23:00:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.424 23:00:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:48 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.425 23:00:48 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:48 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:26.425 23:00:48 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.425 23:00:48 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:48 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.425 23:00:48 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:48 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.425 23:00:48 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:48 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.425 23:00:48 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:48 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.425 23:00:48 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:48 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.425 23:00:48 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:48 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.425 23:00:48 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:48 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.425 23:00:48 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:48 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.425 23:00:48 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 
23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.425 23:00:49 -- 
setup/common.sh@33 -- # echo 0 00:04:26.425 23:00:49 -- setup/common.sh@33 -- # return 0 00:04:26.425 23:00:49 -- setup/hugepages.sh@97 -- # anon=0 00:04:26.425 23:00:49 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:26.425 23:00:49 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.425 23:00:49 -- setup/common.sh@18 -- # local node= 00:04:26.425 23:00:49 -- setup/common.sh@19 -- # local var val 00:04:26.425 23:00:49 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.425 23:00:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.425 23:00:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.425 23:00:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.425 23:00:49 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.425 23:00:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 107864724 kB' 'MemAvailable: 111211772 kB' 'Buffers: 4132 kB' 'Cached: 11817028 kB' 'SwapCached: 0 kB' 'Active: 8900020 kB' 'Inactive: 3525644 kB' 'Active(anon): 8409936 kB' 'Inactive(anon): 0 kB' 'Active(file): 490084 kB' 'Inactive(file): 3525644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607740 kB' 'Mapped: 208524 kB' 'Shmem: 7805432 kB' 'KReclaimable: 311304 kB' 'Slab: 1149420 kB' 'SReclaimable: 311304 kB' 'SUnreclaim: 838116 kB' 'KernelStack: 27552 kB' 'PageTables: 9516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9954604 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235852 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3982708 kB' 'DirectMap2M: 41834496 kB' 'DirectMap1G: 90177536 kB' 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 
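With anon=0 established, verify_nr_hugepages goes back to get_meminfo for the global surplus count, repeating the same field-skipping walk over /proc/meminfo. The invariant all of this feeds, already visible at hugepages.sh@110 earlier in the log, is that the kernel's pool must equal the requested pages plus surplus and reserved pages. As a sketch (resv is assumed to come from HugePages_Rsvd):

    total=$(get_meminfo HugePages_Total)   # 1024 in this run
    surp=$(get_meminfo HugePages_Surp)     # 0
    resv=$(get_meminfo HugePages_Rsvd)     # 0
    nr_hugepages=1024
    (( total == nr_hugepages + surp + resv )) || echo "hugepage count mismatch" >&2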
23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 
23:00:49 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': 
' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.425 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.425 23:00:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.426 23:00:49 -- setup/common.sh@33 -- # echo 0 00:04:26.426 23:00:49 -- setup/common.sh@33 -- # return 0 00:04:26.426 23:00:49 -- setup/hugepages.sh@99 -- # surp=0 00:04:26.426 23:00:49 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:26.426 23:00:49 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:26.426 23:00:49 -- setup/common.sh@18 -- # local node= 00:04:26.426 23:00:49 -- setup/common.sh@19 -- # local var val 00:04:26.426 23:00:49 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.426 23:00:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.426 23:00:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.426 23:00:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.426 23:00:49 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.426 23:00:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 107864000 kB' 'MemAvailable: 111211048 kB' 'Buffers: 4132 kB' 'Cached: 11817040 kB' 'SwapCached: 0 kB' 'Active: 8900368 kB' 'Inactive: 3525644 kB' 'Active(anon): 8410284 kB' 'Inactive(anon): 0 kB' 'Active(file): 490084 kB' 'Inactive(file): 3525644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607936 kB' 'Mapped: 208532 kB' 'Shmem: 7805444 kB' 'KReclaimable: 311304 kB' 'Slab: 1149464 kB' 'SReclaimable: 311304 kB' 'SUnreclaim: 838160 kB' 'KernelStack: 27696 kB' 'PageTables: 9312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9954616 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235820 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3982708 kB' 'DirectMap2M: 41834496 kB' 'DirectMap1G: 90177536 kB' 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 
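The long pattern-match loop traced above is setup/common.sh's get_meminfo walking every key of the meminfo file until it reaches the one requested (HugePages_Surp just finished, HugePages_Rsvd is being scanned now). A minimal sketch of that lookup, reconstructed from the commands visible in the trace (the function body below is inferred, not copied from the SPDK source; extglob is assumed to be enabled for the Node-prefix strip):

shopt -s extglob

get_meminfo() { # usage: get_meminfo <key> [numa-node]
    local get=$1 node=${2:-} var val _ line
    local mem_f=/proc/meminfo mem
    # with a node argument, read that node's per-node stats instead of the global file
    [[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # per-node meminfo lines carry a "Node <n> " prefix; drop it so the keys match
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

# e.g. get_meminfo HugePages_Surp prints 0 in this run; get_meminfo MemFree 0 reads node0's value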
00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- 
setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.426 23:00:49 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.426 23:00:49 -- setup/common.sh@33 -- # echo 0 00:04:26.426 23:00:49 -- setup/common.sh@33 -- # return 0 00:04:26.426 23:00:49 -- setup/hugepages.sh@100 -- # resv=0 00:04:26.426 23:00:49 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:26.426 nr_hugepages=1024 00:04:26.426 23:00:49 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:26.426 resv_hugepages=0 00:04:26.426 23:00:49 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:26.426 surplus_hugepages=0 00:04:26.426 23:00:49 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:26.426 anon_hugepages=0 00:04:26.426 23:00:49 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:26.426 23:00:49 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:26.426 23:00:49 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:26.426 23:00:49 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:26.426 23:00:49 -- setup/common.sh@18 -- # local node= 00:04:26.426 23:00:49 -- setup/common.sh@19 -- # local var val 00:04:26.426 23:00:49 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.426 23:00:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.426 23:00:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.426 23:00:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.426 23:00:49 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.426 23:00:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.426 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 107863996 kB' 'MemAvailable: 111211044 kB' 'Buffers: 4132 kB' 'Cached: 11817040 kB' 'SwapCached: 0 kB' 'Active: 8899936 kB' 'Inactive: 3525644 kB' 'Active(anon): 8409852 kB' 'Inactive(anon): 0 kB' 'Active(file): 490084 kB' 'Inactive(file): 3525644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 608136 kB' 'Mapped: 208532 kB' 'Shmem: 7805444 kB' 'KReclaimable: 311304 kB' 'Slab: 1149464 
kB' 'SReclaimable: 311304 kB' 'SUnreclaim: 838160 kB' 'KernelStack: 27696 kB' 'PageTables: 9428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9954632 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235868 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3982708 kB' 'DirectMap2M: 41834496 kB' 'DirectMap1G: 90177536 kB' 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 
23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # continue 
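Once HugePages_Total is echoed back just below (1024), hugepages.sh closes the loop with a plain accounting identity: the total the kernel reports has to equal the pages the test requested plus the surplus and reserved pages that were just read with get_meminfo. A hedged sketch of that check using the values from this run (the key behind the earlier anon= assignment is not visible in this excerpt, so it is left out):

nr_hugepages=1024                       # requested by the test
surp=$(get_meminfo HugePages_Surp)      # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run
total=$(get_meminfo HugePages_Total)    # 1024 in this run
# both comparisons from the trace: the totals must line up, and with surp=resv=0 they collapse to simple equality
(( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )) && echo "hugepage accounting OK"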
00:04:26.427 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.427 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.427 23:00:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.427 23:00:49 -- setup/common.sh@33 -- # echo 1024 00:04:26.427 23:00:49 -- setup/common.sh@33 -- # return 0 00:04:26.427 23:00:49 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:26.427 23:00:49 -- setup/hugepages.sh@112 -- # get_nodes 00:04:26.427 23:00:49 -- setup/hugepages.sh@27 -- # local node 00:04:26.427 23:00:49 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.427 23:00:49 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:26.427 23:00:49 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.427 23:00:49 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:26.427 23:00:49 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:26.427 23:00:49 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:26.427 23:00:49 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:26.689 23:00:49 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:26.689 23:00:49 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:26.689 23:00:49 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.689 23:00:49 -- setup/common.sh@18 -- # local node=0 00:04:26.689 23:00:49 -- setup/common.sh@19 -- # local var val 00:04:26.689 23:00:49 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.689 23:00:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.689 23:00:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:26.689 23:00:49 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:26.689 23:00:49 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.689 23:00:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.689 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.689 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.690 23:00:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53567692 kB' 'MemUsed: 12091316 kB' 'SwapCached: 0 kB' 'Active: 5529420 kB' 'Inactive: 3325404 kB' 'Active(anon): 5193964 kB' 'Inactive(anon): 0 kB' 'Active(file): 335456 kB' 'Inactive(file): 3325404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8654860 kB' 'Mapped: 131876 kB' 'AnonPages: 203180 kB' 'Shmem: 4994000 kB' 'KernelStack: 13480 kB' 'PageTables: 5720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 177860 kB' 'Slab: 630524 kB' 'SReclaimable: 177860 kB' 'SUnreclaim: 452664 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 
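From here the same scan repeats per NUMA node: get_nodes found node0 and node1 with 512 hugepages each (the requested 1024 split evenly), and for each node get_meminfo is pointed at /sys/devices/system/node/nodeN/meminfo instead of /proc/meminfo to pick up that node's HugePages_Surp. A small sketch of the per-node pass, with the expected-count bookkeeping inferred from the nodes_test arithmetic in the trace (how get_nodes itself obtains the 512 per node is not shown in this excerpt):

shopt -s extglob
declare -a nodes_sys nodes_test
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=512   # 512 on node0 and node1 in this log; the real helper reads it from sysfs
done
nodes_test=("${nodes_sys[@]}")
resv=0
for node in "${!nodes_test[@]}"; do
    # each node's expected count absorbs the reserved pages plus that node's own surplus
    (( nodes_test[node] += resv ))
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
done
echo "expected per-node hugepages: ${nodes_test[*]}"   # 512 512 in this run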
00:04:26.690 23:00:49 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.690 23:00:49 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.690 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.690 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.691 23:00:49 -- setup/common.sh@33 -- # echo 0 00:04:26.691 23:00:49 -- setup/common.sh@33 -- # return 0 00:04:26.691 23:00:49 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:26.691 23:00:49 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:26.691 23:00:49 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:26.691 23:00:49 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:26.691 23:00:49 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.691 23:00:49 -- setup/common.sh@18 -- # local node=1 00:04:26.691 23:00:49 -- setup/common.sh@19 -- # local var val 00:04:26.691 23:00:49 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.691 23:00:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.691 23:00:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:26.691 23:00:49 -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node1/meminfo 00:04:26.691 23:00:49 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.691 23:00:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.691 23:00:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679856 kB' 'MemFree: 54296688 kB' 'MemUsed: 6383168 kB' 'SwapCached: 0 kB' 'Active: 3370212 kB' 'Inactive: 200240 kB' 'Active(anon): 3215584 kB' 'Inactive(anon): 0 kB' 'Active(file): 154628 kB' 'Inactive(file): 200240 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3166340 kB' 'Mapped: 76648 kB' 'AnonPages: 404164 kB' 'Shmem: 2811472 kB' 'KernelStack: 14088 kB' 'PageTables: 3300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133444 kB' 'Slab: 518940 kB' 'SReclaimable: 133444 kB' 'SUnreclaim: 385496 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.691 23:00:49 -- 
setup/common.sh@32 -- # continue 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.691 23:00:49 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.691 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.691 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.692 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.692 23:00:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.692 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.692 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.692 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.692 23:00:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.692 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.692 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.692 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.692 23:00:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.692 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.692 23:00:49 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:26.692 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.692 23:00:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.692 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.692 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.692 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.692 23:00:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.692 23:00:49 -- setup/common.sh@32 -- # continue 00:04:26.692 23:00:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.692 23:00:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.692 23:00:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.692 23:00:49 -- setup/common.sh@33 -- # echo 0 00:04:26.692 23:00:49 -- setup/common.sh@33 -- # return 0 00:04:26.692 23:00:49 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:26.692 23:00:49 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:26.692 23:00:49 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:26.692 23:00:49 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:26.692 23:00:49 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:26.692 node0=512 expecting 512 00:04:26.692 23:00:49 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:26.692 23:00:49 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:26.692 23:00:49 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:26.692 23:00:49 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:26.692 node1=512 expecting 512 00:04:26.692 23:00:49 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:26.692 00:04:26.692 real 0m3.407s 00:04:26.692 user 0m1.242s 00:04:26.692 sys 0m2.171s 00:04:26.692 23:00:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.692 23:00:49 -- common/autotest_common.sh@10 -- # set +x 00:04:26.692 ************************************ 00:04:26.692 END TEST even_2G_alloc 00:04:26.692 ************************************ 00:04:26.692 23:00:49 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:26.692 23:00:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:26.692 23:00:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:26.692 23:00:49 -- common/autotest_common.sh@10 -- # set +x 00:04:26.692 ************************************ 00:04:26.692 START TEST odd_alloc 00:04:26.692 ************************************ 00:04:26.692 23:00:49 -- common/autotest_common.sh@1104 -- # odd_alloc 00:04:26.692 23:00:49 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:26.692 23:00:49 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:26.692 23:00:49 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:26.692 23:00:49 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:26.692 23:00:49 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:26.692 23:00:49 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:26.692 23:00:49 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:26.692 23:00:49 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:26.692 23:00:49 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:26.692 23:00:49 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:26.692 23:00:49 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:26.692 23:00:49 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:26.692 23:00:49 
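The trace above closes out even_2G_alloc: get_meminfo walks the per-node file /sys/devices/system/node/node1/meminfo (each line there carries a "Node 1 " prefix, which common.sh strips with the extglob pattern "Node +([0-9]) " before parsing), finds HugePages_Surp: 0 and HugePages_Total: 512 on that node, and the test echoes "node0=512 expecting 512" / "node1=512 expecting 512" before passing in about 3.4 s. A minimal sketch of that per-node read follows; read_node_meminfo is an illustrative name, not a helper defined in setup/common.sh, and it uses a plain glob where the real script uses the extglob pattern:

  # Sketch: read one field from a per-node meminfo file, mirroring the
  # get_meminfo calls traced above.
  read_node_meminfo() {
      local node=$1 key=$2
      local file=/sys/devices/system/node/node${node}/meminfo
      [[ -e $file ]] || file=/proc/meminfo        # fall back to the global view
      local line var val _
      while read -r line; do
          line=${line#Node * }                    # drop the "Node <N> " prefix, if any
          IFS=': ' read -r var val _ <<<"$line"
          if [[ $var == "$key" ]]; then
              echo "$val"
              return 0
          fi
      done < "$file"
      return 1
  }

On the node 1 snapshot shown above, read_node_meminfo 1 HugePages_Total would print 512.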
-- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:26.692 23:00:49 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:26.692 23:00:49 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:26.692 23:00:49 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:26.692 23:00:49 -- setup/hugepages.sh@83 -- # : 513 00:04:26.692 23:00:49 -- setup/hugepages.sh@84 -- # : 1 00:04:26.692 23:00:49 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:26.692 23:00:49 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:26.692 23:00:49 -- setup/hugepages.sh@83 -- # : 0 00:04:26.692 23:00:49 -- setup/hugepages.sh@84 -- # : 0 00:04:26.692 23:00:49 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:26.692 23:00:49 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:26.692 23:00:49 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:26.692 23:00:49 -- setup/hugepages.sh@160 -- # setup output 00:04:26.692 23:00:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.692 23:00:49 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:29.991 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:29.991 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:29.991 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:29.991 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:29.991 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:29.991 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:29.991 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:29.991 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:29.991 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:29.991 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:29.991 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:29.991 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:29.991 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:29.991 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:29.991 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:29.991 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:29.991 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:30.256 23:00:52 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:30.256 23:00:52 -- setup/hugepages.sh@89 -- # local node 00:04:30.256 23:00:52 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:30.256 23:00:52 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:30.256 23:00:52 -- setup/hugepages.sh@92 -- # local surp 00:04:30.256 23:00:52 -- setup/hugepages.sh@93 -- # local resv 00:04:30.256 23:00:52 -- setup/hugepages.sh@94 -- # local anon 00:04:30.256 23:00:52 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:30.256 23:00:52 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:30.256 23:00:52 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:30.256 23:00:52 -- setup/common.sh@18 -- # local node= 00:04:30.256 23:00:52 -- setup/common.sh@19 -- # local var val 00:04:30.256 23:00:52 -- setup/common.sh@20 -- # local mem_f mem 00:04:30.256 23:00:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.256 23:00:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.256 23:00:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.256 23:00:52 -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.256 
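odd_alloc, started just above, asks for 2098176 kB of hugepage memory (matching HUGEMEM=2049, i.e. 2049 MiB). That does not divide evenly into 2048 kB pages, so the helper settles on nr_hugepages=1025, and the per-node loop hands node 1 512 pages and node 0 the odd one out, 513, before setup.sh is invoked again with HUGE_EVEN_ALLOC=yes; the device lines simply confirm the PCI functions on this test node (including the 144d:a80a NVMe at 0000:65:00.0) are already bound to vfio-pci. A sketch of the same split arithmetic, with illustrative variable names:

  # Illustrative sketch of how an odd page count ends up split 513/512 across
  # two NUMA nodes, matching the node0=513 / node1=512 assignment traced above.
  total_kb=2098176                                   # HUGEMEM=2049 MiB in kB
  page_kb=2048                                       # 2 MiB hugepages
  nr_pages=$(( (total_kb + page_kb - 1) / page_kb )) # round up -> 1025
  nodes=2
  base=$(( nr_pages / nodes ))                       # 512
  extra=$(( nr_pages % nodes ))                      # 1 page left over
  for (( node = 0; node < nodes; node++ )); do
      want=$(( base + (node < extra ? 1 : 0) ))      # node0 gets 513, node1 gets 512
      echo "node${node}=${want}"
      # A real allocation would write ${want} to
      # /sys/devices/system/node/node${node}/hugepages/hugepages-${page_kb}kB/nr_hugepages
  done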
23:00:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.256 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.256 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.256 23:00:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 107867584 kB' 'MemAvailable: 111214600 kB' 'Buffers: 4132 kB' 'Cached: 11817176 kB' 'SwapCached: 0 kB' 'Active: 8900852 kB' 'Inactive: 3525644 kB' 'Active(anon): 8410768 kB' 'Inactive(anon): 0 kB' 'Active(file): 490084 kB' 'Inactive(file): 3525644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 608412 kB' 'Mapped: 208580 kB' 'Shmem: 7805580 kB' 'KReclaimable: 311240 kB' 'Slab: 1149312 kB' 'SReclaimable: 311240 kB' 'SUnreclaim: 838072 kB' 'KernelStack: 27408 kB' 'PageTables: 8824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508436 kB' 'Committed_AS: 9950504 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235724 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3982708 kB' 'DirectMap2M: 41834496 kB' 'DirectMap1G: 90177536 kB' 00:04:30.256 23:00:52 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.256 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.256 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.256 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.256 23:00:52 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.256 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.256 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.256 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.256 23:00:52 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.256 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.256 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.256 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.256 23:00:52 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.256 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.256 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.256 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.256 23:00:52 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.256 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.256 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.256 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.256 23:00:52 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.256 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.256 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.256 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.256 23:00:52 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.256 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.256 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.256 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.256 23:00:52 -- setup/common.sh@32 -- # [[ Inactive == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.256 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.256 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.256 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.256 23:00:52 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 
00:04:30.257 23:00:52 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 
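The long runs of "continue" on either side of this point are get_meminfo at work: it reads /proc/meminfo one "field: value" pair at a time with IFS=': ', skips every field whose name is not the requested key (AnonHugePages for this call; the backslash-riddled right-hand side such as \A\n\o\n\H\u\g\e\P\a\g\e\s is just how bash xtrace quotes the pattern of the [[ == ]] test), and echoes the value once it matches. Stripped of the tracing, the core of the loop is roughly the following sketch, not a copy of setup/common.sh:

  get=AnonHugePages                        # key requested by the caller
  while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] || continue     # each mismatch logs one "continue"
      echo "$val"                          # 0 for AnonHugePages on this host
      break
  done < /proc/meminfo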
00:04:30.257 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.257 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.257 23:00:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.257 23:00:52 -- setup/common.sh@33 -- # echo 0 00:04:30.257 23:00:52 -- setup/common.sh@33 -- # return 0 00:04:30.257 23:00:52 -- setup/hugepages.sh@97 -- # anon=0 00:04:30.258 23:00:52 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:30.258 23:00:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.258 23:00:52 -- setup/common.sh@18 -- # local node= 00:04:30.258 23:00:52 -- setup/common.sh@19 -- # local var val 00:04:30.258 23:00:52 -- setup/common.sh@20 -- # local mem_f mem 00:04:30.258 23:00:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.258 23:00:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.258 23:00:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.258 23:00:52 -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.258 23:00:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.258 23:00:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 107869928 kB' 'MemAvailable: 111216944 kB' 'Buffers: 4132 kB' 'Cached: 11817176 kB' 'SwapCached: 0 kB' 'Active: 8902232 kB' 'Inactive: 3525644 kB' 'Active(anon): 8412148 kB' 'Inactive(anon): 0 kB' 'Active(file): 490084 kB' 'Inactive(file): 3525644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 
'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 609888 kB' 'Mapped: 209268 kB' 'Shmem: 7805580 kB' 'KReclaimable: 311240 kB' 'Slab: 1149312 kB' 'SReclaimable: 311240 kB' 'SUnreclaim: 838072 kB' 'KernelStack: 27440 kB' 'PageTables: 8916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508436 kB' 'Committed_AS: 9952004 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235644 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3982708 kB' 'DirectMap2M: 41834496 kB' 'DirectMap1G: 90177536 kB' 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:30.258 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.258 
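verify_nr_hugepages repeats the same scan for three bookkeeping fields in turn: AnonHugePages (transparent hugepages currently backing anonymous memory), HugePages_Surp (surplus pages allocated beyond nr_hugepages via overcommit) and HugePages_Rsvd (pages reserved for mappings but not yet faulted in). All three come back 0 in the reads that follow, so nothing inflates or deflates the pool the test just configured. The same fields, plus the total, can be eyeballed in one go (a convenience sketch; the script itself sticks to get_meminfo):

  awk '/^(AnonHugePages|HugePages_Surp|HugePages_Rsvd|HugePages_Total):/ {print $1, $2}' /proc/meminfo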
23:00:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.258 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.258 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.259 23:00:52 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # [[ Unaccepted 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.259 23:00:52 -- setup/common.sh@33 -- # echo 0 00:04:30.259 23:00:52 -- setup/common.sh@33 -- # return 0 00:04:30.259 23:00:52 -- setup/hugepages.sh@99 -- # surp=0 00:04:30.259 23:00:52 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:30.259 23:00:52 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:30.259 23:00:52 -- setup/common.sh@18 -- # local node= 00:04:30.259 23:00:52 -- setup/common.sh@19 -- # local var val 00:04:30.259 23:00:52 -- setup/common.sh@20 -- # local mem_f mem 00:04:30.259 23:00:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.259 23:00:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.259 23:00:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.259 23:00:52 -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.259 23:00:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.259 23:00:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 107870780 kB' 'MemAvailable: 111217796 kB' 'Buffers: 4132 kB' 'Cached: 11817188 kB' 'SwapCached: 0 kB' 'Active: 8906048 kB' 'Inactive: 3525644 kB' 'Active(anon): 8415964 kB' 'Inactive(anon): 0 kB' 'Active(file): 490084 kB' 'Inactive(file): 3525644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 614172 kB' 'Mapped: 209220 kB' 'Shmem: 7805592 kB' 'KReclaimable: 311240 kB' 'Slab: 1149360 kB' 'SReclaimable: 311240 kB' 'SUnreclaim: 838120 kB' 'KernelStack: 27408 kB' 'PageTables: 8840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508436 kB' 'Committed_AS: 9956652 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235648 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3982708 kB' 'DirectMap2M: 41834496 kB' 'DirectMap1G: 90177536 kB' 00:04:30.259 23:00:52 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.259 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.259 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- 
setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 
23:00:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.260 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.260 23:00:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.261 23:00:52 -- setup/common.sh@33 -- # echo 0 00:04:30.261 
23:00:52 -- setup/common.sh@33 -- # return 0 00:04:30.261 23:00:52 -- setup/hugepages.sh@100 -- # resv=0 00:04:30.261 23:00:52 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:30.261 nr_hugepages=1025 00:04:30.261 23:00:52 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:30.261 resv_hugepages=0 00:04:30.261 23:00:52 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:30.261 surplus_hugepages=0 00:04:30.261 23:00:52 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:30.261 anon_hugepages=0 00:04:30.261 23:00:52 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:30.261 23:00:52 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:30.261 23:00:52 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:30.261 23:00:52 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:30.261 23:00:52 -- setup/common.sh@18 -- # local node= 00:04:30.261 23:00:52 -- setup/common.sh@19 -- # local var val 00:04:30.261 23:00:52 -- setup/common.sh@20 -- # local mem_f mem 00:04:30.261 23:00:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.261 23:00:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.261 23:00:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.261 23:00:52 -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.261 23:00:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.261 23:00:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 107871108 kB' 'MemAvailable: 111218124 kB' 'Buffers: 4132 kB' 'Cached: 11817204 kB' 'SwapCached: 0 kB' 'Active: 8906620 kB' 'Inactive: 3525644 kB' 'Active(anon): 8416536 kB' 'Inactive(anon): 0 kB' 'Active(file): 490084 kB' 'Inactive(file): 3525644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 614232 kB' 'Mapped: 209384 kB' 'Shmem: 7805608 kB' 'KReclaimable: 311240 kB' 'Slab: 1149360 kB' 'SReclaimable: 311240 kB' 'SUnreclaim: 838120 kB' 'KernelStack: 27408 kB' 'PageTables: 8852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508436 kB' 'Committed_AS: 9956664 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235648 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3982708 kB' 'DirectMap2M: 41834496 kB' 'DirectMap1G: 90177536 kB' 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:30.261 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.261 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.261 23:00:52 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.261 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # continue 
00:04:30.262 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.262 
23:00:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.262 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.262 23:00:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.262 23:00:52 -- setup/common.sh@33 -- # echo 1025 00:04:30.262 23:00:52 -- setup/common.sh@33 -- # return 0 00:04:30.262 23:00:52 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:30.262 23:00:52 -- setup/hugepages.sh@112 -- # get_nodes 00:04:30.262 23:00:52 -- setup/hugepages.sh@27 -- # local node 00:04:30.262 23:00:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:30.262 23:00:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:30.262 23:00:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:30.262 23:00:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:30.262 23:00:52 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:30.262 23:00:52 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:30.263 23:00:52 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:30.263 23:00:52 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:30.263 23:00:52 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:30.263 23:00:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.263 23:00:52 
-- setup/common.sh@18 -- # local node=0 00:04:30.263 23:00:52 -- setup/common.sh@19 -- # local var val 00:04:30.263 23:00:52 -- setup/common.sh@20 -- # local mem_f mem 00:04:30.263 23:00:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.263 23:00:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:30.263 23:00:52 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:30.263 23:00:52 -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.263 23:00:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.263 23:00:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53574564 kB' 'MemUsed: 12084444 kB' 'SwapCached: 0 kB' 'Active: 5529936 kB' 'Inactive: 3325404 kB' 'Active(anon): 5194480 kB' 'Inactive(anon): 0 kB' 'Active(file): 335456 kB' 'Inactive(file): 3325404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8654952 kB' 'Mapped: 131900 kB' 'AnonPages: 203612 kB' 'Shmem: 4994092 kB' 'KernelStack: 13384 kB' 'PageTables: 5408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 177796 kB' 'Slab: 630560 kB' 'SReclaimable: 177796 kB' 'SUnreclaim: 452764 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.263 23:00:52 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 
00:04:30.263 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:30.263 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.263 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.263 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.264 23:00:52 -- setup/common.sh@33 -- # echo 0 00:04:30.264 23:00:52 -- setup/common.sh@33 -- # return 0 00:04:30.264 23:00:52 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:30.264 23:00:52 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:30.264 23:00:52 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:30.264 23:00:52 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:30.264 23:00:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.264 23:00:52 -- setup/common.sh@18 -- # local node=1 00:04:30.264 23:00:52 -- setup/common.sh@19 -- # local var val 00:04:30.264 23:00:52 -- setup/common.sh@20 -- # local mem_f mem 00:04:30.264 23:00:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.264 23:00:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:30.264 23:00:52 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:30.264 23:00:52 -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.264 23:00:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.264 23:00:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679856 kB' 'MemFree: 54295788 kB' 'MemUsed: 6384068 kB' 'SwapCached: 0 kB' 'Active: 3371000 kB' 'Inactive: 200240 kB' 'Active(anon): 3216372 kB' 'Inactive(anon): 0 kB' 'Active(file): 154628 kB' 'Inactive(file): 200240 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3166400 kB' 'Mapped: 76664 kB' 'AnonPages: 404984 kB' 'Shmem: 2811532 kB' 'KernelStack: 14024 kB' 'PageTables: 3424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133444 kB' 'Slab: 518800 kB' 'SReclaimable: 133444 kB' 'SUnreclaim: 385356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.264 23:00:52 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 
00:04:30.264 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.264 23:00:52 -- 
setup/common.sh@32 -- # continue 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.264 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.264 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.265 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.265 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.265 23:00:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.265 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.265 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.265 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.265 23:00:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.265 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.265 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.265 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.265 23:00:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.265 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.265 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.265 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.265 23:00:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.265 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.265 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.265 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.265 23:00:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.265 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.265 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.265 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.265 23:00:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.265 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.265 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.265 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.265 23:00:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.265 23:00:52 -- setup/common.sh@32 -- # continue 00:04:30.265 23:00:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.265 23:00:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.265 23:00:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.265 23:00:52 -- setup/common.sh@33 -- # echo 0 00:04:30.265 23:00:52 -- setup/common.sh@33 -- # return 0 00:04:30.265 23:00:52 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:30.265 23:00:52 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:30.265 23:00:52 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:30.265 23:00:52 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:30.265 23:00:52 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:30.265 node0=512 expecting 513 00:04:30.265 23:00:52 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:30.265 23:00:52 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 
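The loop above (it continues directly below with the matching sorted_s entry and the node1 echo) is how hugepages.sh tolerates the odd split landing on either node: the per-node counts read back from the system are used as indices of one array and the per-node counts the test expected as indices of the other (sorted_t and sorted_s in the trace), and since bash lists indexed-array indices in ascending order, comparing the two key lists only asks whether the set {512, 513} came back, not which node got the extra page. A self-contained sketch of that comparison, with the counts hard-coded from this run (array and variable names are illustrative):

observed=(512 513)    # HugePages_Total read back from node0 and node1 above
requested=(513 512)   # per-node counts the odd_alloc test asked for
sorted_t=() sorted_s=()
for n in 0 1; do
    sorted_t[${observed[n]}]=1    # value used as the array index, so indices come back sorted
    sorted_s[${requested[n]}]=1
done
# Both key lists expand to "512 513", so the split is accepted even though the
# two nodes are swapped relative to the request.
[[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo 'per-node split OK'

This is the same shape as the [[ 512 513 == 512 513 ]] check logged at hugepages.sh@130 just below.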
00:04:30.265 23:00:52 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:30.265 23:00:52 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:30.265 node1=513 expecting 512 00:04:30.265 23:00:52 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:30.265 00:04:30.265 real 0m3.705s 00:04:30.265 user 0m1.488s 00:04:30.265 sys 0m2.285s 00:04:30.265 23:00:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.265 23:00:52 -- common/autotest_common.sh@10 -- # set +x 00:04:30.265 ************************************ 00:04:30.265 END TEST odd_alloc 00:04:30.265 ************************************ 00:04:30.527 23:00:52 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:30.527 23:00:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:30.527 23:00:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:30.527 23:00:52 -- common/autotest_common.sh@10 -- # set +x 00:04:30.527 ************************************ 00:04:30.527 START TEST custom_alloc 00:04:30.527 ************************************ 00:04:30.527 23:00:52 -- common/autotest_common.sh@1104 -- # custom_alloc 00:04:30.527 23:00:52 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:30.527 23:00:52 -- setup/hugepages.sh@169 -- # local node 00:04:30.527 23:00:52 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:30.527 23:00:52 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:30.527 23:00:52 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:30.527 23:00:52 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:30.527 23:00:52 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:30.527 23:00:52 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:30.527 23:00:52 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:30.527 23:00:52 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:30.527 23:00:52 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:30.527 23:00:52 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:30.527 23:00:52 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:30.527 23:00:52 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:30.527 23:00:52 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:30.527 23:00:52 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:30.527 23:00:52 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:30.527 23:00:52 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:30.527 23:00:52 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:30.527 23:00:52 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:30.527 23:00:52 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:30.527 23:00:52 -- setup/hugepages.sh@83 -- # : 256 00:04:30.527 23:00:52 -- setup/hugepages.sh@84 -- # : 1 00:04:30.527 23:00:52 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:30.527 23:00:52 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:30.527 23:00:52 -- setup/hugepages.sh@83 -- # : 0 00:04:30.527 23:00:52 -- setup/hugepages.sh@84 -- # : 0 00:04:30.527 23:00:52 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:30.527 23:00:52 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:30.527 23:00:52 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:30.527 23:00:52 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:30.527 23:00:52 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:30.527 23:00:52 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:30.527 23:00:52 -- setup/hugepages.sh@55 -- # (( size >= 
default_hugepages )) 00:04:30.527 23:00:52 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:30.527 23:00:52 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:30.527 23:00:52 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:30.527 23:00:52 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:30.527 23:00:52 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:30.527 23:00:52 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:30.527 23:00:52 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:30.527 23:00:52 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:30.527 23:00:52 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:30.527 23:00:52 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:30.527 23:00:52 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:30.527 23:00:52 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:30.527 23:00:52 -- setup/hugepages.sh@78 -- # return 0 00:04:30.527 23:00:52 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:30.527 23:00:52 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:30.527 23:00:52 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:30.527 23:00:52 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:30.527 23:00:52 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:30.527 23:00:52 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:30.527 23:00:52 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:30.527 23:00:52 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:30.527 23:00:52 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:30.527 23:00:52 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:30.527 23:00:52 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:30.527 23:00:52 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:30.527 23:00:52 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:30.527 23:00:52 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:30.527 23:00:52 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:30.527 23:00:52 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:30.527 23:00:52 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:30.527 23:00:52 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:30.527 23:00:52 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:30.527 23:00:52 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:30.527 23:00:52 -- setup/hugepages.sh@78 -- # return 0 00:04:30.527 23:00:52 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:30.527 23:00:52 -- setup/hugepages.sh@187 -- # setup output 00:04:30.527 23:00:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.527 23:00:52 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:33.830 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:33.830 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:33.830 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:33.830 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:33.830 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:33.830 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:33.830 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:33.830 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:33.830 0000:00:01.6 
(8086 0b00): Already using the vfio-pci driver 00:04:33.830 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:33.830 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:33.830 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:33.830 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:33.830 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:33.830 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:33.830 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:33.830 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:33.830 23:00:56 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:33.830 23:00:56 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:33.830 23:00:56 -- setup/hugepages.sh@89 -- # local node 00:04:33.830 23:00:56 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:33.830 23:00:56 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:33.830 23:00:56 -- setup/hugepages.sh@92 -- # local surp 00:04:33.830 23:00:56 -- setup/hugepages.sh@93 -- # local resv 00:04:33.830 23:00:56 -- setup/hugepages.sh@94 -- # local anon 00:04:33.830 23:00:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:33.830 23:00:56 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:33.830 23:00:56 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:33.830 23:00:56 -- setup/common.sh@18 -- # local node= 00:04:33.830 23:00:56 -- setup/common.sh@19 -- # local var val 00:04:33.830 23:00:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:33.830 23:00:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.830 23:00:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.830 23:00:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.830 23:00:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.830 23:00:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.830 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.830 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.830 23:00:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 106836324 kB' 'MemAvailable: 110183340 kB' 'Buffers: 4132 kB' 'Cached: 11817320 kB' 'SwapCached: 0 kB' 'Active: 8903424 kB' 'Inactive: 3525644 kB' 'Active(anon): 8413340 kB' 'Inactive(anon): 0 kB' 'Active(file): 490084 kB' 'Inactive(file): 3525644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 610484 kB' 'Mapped: 208636 kB' 'Shmem: 7805724 kB' 'KReclaimable: 311240 kB' 'Slab: 1150024 kB' 'SReclaimable: 311240 kB' 'SUnreclaim: 838784 kB' 'KernelStack: 27536 kB' 'PageTables: 9248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985172 kB' 'Committed_AS: 9951596 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235628 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3982708 kB' 'DirectMap2M: 41834496 kB' 'DirectMap1G: 90177536 kB' 00:04:33.830 23:00:56 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.830 23:00:56 
-- setup/common.sh@32 -- # continue 00:04:33.830 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.830 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.830 23:00:56 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.830 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.830 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.830 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.830 23:00:56 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.830 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.830 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.830 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.830 23:00:56 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.830 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.830 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.830 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.830 23:00:56 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.830 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.830 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.830 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.830 23:00:56 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.830 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.830 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.830 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.830 23:00:56 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.830 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.830 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.830 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.830 23:00:56 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.830 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.831 23:00:56 -- setup/common.sh@32 -- 
# [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 
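The anon-hugepages branch opened at hugepages.sh@96 a little further up (the get_meminfo AnonHugePages scan it triggered is still walking the key list here and finishes below) gates on transparent-hugepage state before counting anything. A sketch of that gate, assuming the standard sysfs location for the THP mode file (the variable names and the reuse of the meminfo_value helper sketched earlier are illustrative):

# Only count transparent (anonymous) hugepages when THP is not globally off.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
anon_kb=0
if [[ $thp != *'[never]'* ]]; then
    anon_kb=$(meminfo_value AnonHugePages)
fi
echo "anon_hugepages=${anon_kb} kB"

On this machine the mode string is "always [madvise] never", so the pattern test at @96 succeeds and the AnonHugePages value (0 kB in the meminfo dump above) is what the verification ends up using.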
00:04:33.831 23:00:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.831 23:00:56 -- setup/common.sh@31 
-- # IFS=': ' 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # continue 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.831 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.831 23:00:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.831 23:00:56 -- setup/common.sh@33 -- # echo 0 00:04:33.831 23:00:56 -- setup/common.sh@33 -- # return 0 00:04:33.831 23:00:56 -- setup/hugepages.sh@97 -- # anon=0 00:04:33.831 23:00:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:33.831 23:00:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:33.831 23:00:56 -- setup/common.sh@18 -- # local node= 00:04:33.831 23:00:56 -- setup/common.sh@19 -- # local var val 00:04:33.831 23:00:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:33.831 23:00:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.097 23:00:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.097 23:00:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.097 23:00:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.097 23:00:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.097 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.097 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.097 23:00:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 106836912 kB' 'MemAvailable: 110183928 kB' 'Buffers: 4132 kB' 'Cached: 11817320 kB' 'SwapCached: 0 kB' 'Active: 8901768 kB' 'Inactive: 3525644 kB' 'Active(anon): 8411684 kB' 'Inactive(anon): 0 kB' 'Active(file): 490084 kB' 'Inactive(file): 3525644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 609336 kB' 'Mapped: 208556 kB' 'Shmem: 7805724 kB' 'KReclaimable: 311240 kB' 'Slab: 1150008 kB' 'SReclaimable: 311240 kB' 'SUnreclaim: 838768 kB' 'KernelStack: 27408 kB' 'PageTables: 8848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985172 kB' 'Committed_AS: 9951608 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235612 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3982708 kB' 'DirectMap2M: 41834496 kB' 'DirectMap1G: 90177536 kB' 00:04:34.097 23:00:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.097 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.097 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.097 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.097 23:00:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.097 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.097 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.097 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.097 23:00:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.097 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.097 23:00:56 
-- setup/common.sh@31 -- # IFS=': ' 00:04:34.097 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.097 23:00:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.097 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.097 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.097 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.097 23:00:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.097 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.097 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.097 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.097 23:00:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.097 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.097 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.097 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.097 23:00:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.097 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.097 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 
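The same key-by-key scan repeats here for HugePages_Surp. Outside the harness, an equivalent one-liner (a sketch, not what the script runs) would be:

awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo
# resolves to 0 on this builder, so hugepages.sh records surp=0 further down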
00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.098 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.098 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.099 23:00:56 -- setup/common.sh@33 -- # echo 0 00:04:34.099 23:00:56 -- setup/common.sh@33 -- # return 0 00:04:34.099 23:00:56 -- setup/hugepages.sh@99 -- # surp=0 00:04:34.099 23:00:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:34.099 23:00:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:34.099 23:00:56 -- setup/common.sh@18 -- # local node= 00:04:34.099 23:00:56 -- setup/common.sh@19 -- # local var val 00:04:34.099 23:00:56 -- setup/common.sh@20 
-- # local mem_f mem 00:04:34.099 23:00:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.099 23:00:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.099 23:00:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.099 23:00:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.099 23:00:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.099 23:00:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 106837516 kB' 'MemAvailable: 110184532 kB' 'Buffers: 4132 kB' 'Cached: 11817332 kB' 'SwapCached: 0 kB' 'Active: 8901792 kB' 'Inactive: 3525644 kB' 'Active(anon): 8411708 kB' 'Inactive(anon): 0 kB' 'Active(file): 490084 kB' 'Inactive(file): 3525644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 609336 kB' 'Mapped: 208556 kB' 'Shmem: 7805736 kB' 'KReclaimable: 311240 kB' 'Slab: 1150008 kB' 'SReclaimable: 311240 kB' 'SUnreclaim: 838768 kB' 'KernelStack: 27408 kB' 'PageTables: 8848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985172 kB' 'Committed_AS: 9951620 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235612 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3982708 kB' 'DirectMap2M: 41834496 kB' 'DirectMap1G: 90177536 kB' 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.099 23:00:56 
-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 
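Once AnonHugePages, HugePages_Surp and HugePages_Rsvd have all been read (the Rsvd pass is the trace around this point), hugepages.sh@107 checks that the configured page count still adds up. A hedged reconstruction of that arithmetic, using the values this run reports:

nr_hugepages=1536   # requested by the test, echoed at hugepages.sh@102 below
surp=0              # HugePages_Surp
resv=0              # HugePages_Rsvd
anon=0              # AnonHugePages
(( 1536 == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"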
00:04:34.099 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.099 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.099 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.100 23:00:56 -- 
setup/common.sh@32 -- # continue 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.100 23:00:56 -- setup/common.sh@33 -- # echo 0 00:04:34.100 23:00:56 -- setup/common.sh@33 -- # return 0 00:04:34.100 23:00:56 -- setup/hugepages.sh@100 -- # resv=0 00:04:34.100 23:00:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:34.100 nr_hugepages=1536 00:04:34.100 23:00:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:34.100 resv_hugepages=0 00:04:34.100 23:00:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:34.100 surplus_hugepages=0 00:04:34.100 23:00:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:34.100 anon_hugepages=0 00:04:34.100 23:00:56 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:34.100 23:00:56 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:34.100 23:00:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:34.100 23:00:56 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:34.100 23:00:56 -- setup/common.sh@18 -- # local node= 00:04:34.100 23:00:56 -- setup/common.sh@19 -- # local var val 00:04:34.100 23:00:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:34.100 23:00:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.100 23:00:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.100 23:00:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.100 23:00:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.100 23:00:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.100 23:00:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 106837768 kB' 'MemAvailable: 110184784 
kB' 'Buffers: 4132 kB' 'Cached: 11817348 kB' 'SwapCached: 0 kB' 'Active: 8901816 kB' 'Inactive: 3525644 kB' 'Active(anon): 8411732 kB' 'Inactive(anon): 0 kB' 'Active(file): 490084 kB' 'Inactive(file): 3525644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 609340 kB' 'Mapped: 208556 kB' 'Shmem: 7805752 kB' 'KReclaimable: 311240 kB' 'Slab: 1150008 kB' 'SReclaimable: 311240 kB' 'SUnreclaim: 838768 kB' 'KernelStack: 27408 kB' 'PageTables: 8848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985172 kB' 'Committed_AS: 9951636 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235612 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3982708 kB' 'DirectMap2M: 41834496 kB' 'DirectMap1G: 90177536 kB' 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.100 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.100 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
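A quick cross-check of the /proc/meminfo snapshot printed just above (plain arithmetic, not something the harness executes): 1536 huge pages at the reported 2048 kB page size account exactly for the Hugetlb figure.

echo $(( 1536 * 2048 ))   # -> 3145728, matching 'Hugetlb: 3145728 kB' in the snapshot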
00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- 
# continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.101 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.101 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.102 23:00:56 -- setup/common.sh@33 -- # echo 1536 00:04:34.102 23:00:56 -- setup/common.sh@33 -- # return 0 00:04:34.102 23:00:56 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:34.102 23:00:56 -- setup/hugepages.sh@112 -- # get_nodes 00:04:34.102 23:00:56 -- setup/hugepages.sh@27 -- # local node 00:04:34.102 23:00:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:34.102 23:00:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:34.102 23:00:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:34.102 23:00:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:34.102 23:00:56 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:34.102 23:00:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:34.102 23:00:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:34.102 23:00:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:34.102 23:00:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:34.102 23:00:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.102 23:00:56 -- setup/common.sh@18 -- # local node=0 00:04:34.102 23:00:56 -- setup/common.sh@19 -- # local var val 00:04:34.102 23:00:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:34.102 23:00:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.102 23:00:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:34.102 23:00:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:34.102 23:00:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.102 23:00:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.102 23:00:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53593032 kB' 'MemUsed: 12065976 kB' 'SwapCached: 0 kB' 'Active: 5529692 kB' 'Inactive: 3325404 kB' 'Active(anon): 5194236 kB' 'Inactive(anon): 0 kB' 'Active(file): 335456 kB' 'Inactive(file): 3325404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8655044 kB' 'Mapped: 131880 kB' 'AnonPages: 203180 kB' 'Shmem: 4994184 kB' 'KernelStack: 13352 kB' 'PageTables: 5364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 177796 kB' 'Slab: 630816 kB' 'SReclaimable: 177796 kB' 'SUnreclaim: 453020 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.102 23:00:56 -- 
setup/common.sh@32 -- # continue 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.102 23:00:56 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.102 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.102 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 
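The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" followed by "continue" in this trace are the get_meminfo helper from setup/common.sh scanning a captured meminfo snapshot one field at a time until it reaches the key it was asked for. A minimal sketch of that helper, reconstructed from the trace rather than copied from the SPDK test tree (the real function may differ in detail; the variable names follow those visible above):
shopt -s extglob                              # needed for the +([0-9]) pattern below
get_meminfo() {                               # usage: get_meminfo <field> [<numa node>]
    local get=$1 node=$2 mem_f=/proc/meminfo var val _ line
    # A per-node query reads the node-local snapshot instead of the global one.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")          # strip the "Node N " prefix of per-node files
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }   # e.g. the "echo 512" above
    done
    return 1
}
Called as "get_meminfo HugePages_Surp 0", this walks every field of /sys/devices/system/node/node0/meminfo (the stream of continue entries here) and finally echoes the surplus huge page count for node 0.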
00:04:34.103 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.103 23:00:56 -- setup/common.sh@33 -- # echo 0 00:04:34.103 23:00:56 -- setup/common.sh@33 -- # return 0 00:04:34.103 23:00:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:34.103 23:00:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:34.103 23:00:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:34.103 23:00:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:34.103 23:00:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.103 23:00:56 -- 
setup/common.sh@18 -- # local node=1 00:04:34.103 23:00:56 -- setup/common.sh@19 -- # local var val 00:04:34.103 23:00:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:34.103 23:00:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.103 23:00:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:34.103 23:00:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:34.103 23:00:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.103 23:00:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.103 23:00:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679856 kB' 'MemFree: 53244736 kB' 'MemUsed: 7435120 kB' 'SwapCached: 0 kB' 'Active: 3371784 kB' 'Inactive: 200240 kB' 'Active(anon): 3217156 kB' 'Inactive(anon): 0 kB' 'Active(file): 154628 kB' 'Inactive(file): 200240 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3166460 kB' 'Mapped: 76676 kB' 'AnonPages: 405768 kB' 'Shmem: 2811592 kB' 'KernelStack: 14040 kB' 'PageTables: 3436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133444 kB' 'Slab: 519192 kB' 'SReclaimable: 133444 kB' 'SUnreclaim: 385748 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.103 23:00:56 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.103 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.103 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 
00:04:34.104 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
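The scan in progress here is the node 1 half of custom_alloc's final bookkeeping: for each NUMA node the script adds the reserved count and that node's HugePages_Surp to the expected total, then compares the result with what was actually allocated (512 pages on node 0, 1024 on node 1 in this run). A condensed view of that check, assuming the get_meminfo sketch above; the array names mirror the nodes_test/nodes_sys variables in the trace:
resv=0                                        # HugePages_Rsvd was 0 in this run
nodes_sys=([0]=512 [1]=1024)                  # read back from the per-node meminfo files
nodes_test=([0]=512 [1]=1024)                 # what the test asked to allocate
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))   # 0 for both nodes here
    echo "node${node}=${nodes_sys[node]} expecting ${nodes_test[node]}"
done
# custom_alloc passes when the joined per-node counts match, as in the
# "[[ 512,1024 == 512,1024 ]]" entry a little further down:
[[ "${nodes_sys[0]},${nodes_sys[1]}" == "${nodes_test[0]},${nodes_test[1]}" ]]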
00:04:34.104 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # continue 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.104 23:00:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.104 23:00:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.104 23:00:56 -- setup/common.sh@33 -- # echo 0 00:04:34.104 23:00:56 -- setup/common.sh@33 -- # return 0 00:04:34.104 23:00:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:34.104 23:00:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:34.104 23:00:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:34.104 23:00:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:34.104 23:00:56 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:34.104 node0=512 expecting 512 00:04:34.104 23:00:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:34.104 23:00:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:34.104 23:00:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:34.104 23:00:56 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:34.104 node1=1024 expecting 1024 00:04:34.104 23:00:56 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:34.104 00:04:34.104 real 0m3.722s 00:04:34.104 user 0m1.526s 00:04:34.104 sys 0m2.262s 00:04:34.104 23:00:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.104 23:00:56 -- common/autotest_common.sh@10 -- # set +x 00:04:34.104 ************************************ 00:04:34.104 END TEST custom_alloc 00:04:34.104 ************************************ 00:04:34.104 23:00:56 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:34.104 23:00:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:34.104 23:00:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:34.104 23:00:56 -- common/autotest_common.sh@10 -- # set +x 00:04:34.104 ************************************ 00:04:34.104 START TEST no_shrink_alloc 00:04:34.104 ************************************ 00:04:34.104 23:00:56 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:04:34.104 23:00:56 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:34.104 23:00:56 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:34.104 23:00:56 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:34.104 23:00:56 -- setup/hugepages.sh@51 -- # shift 00:04:34.104 23:00:56 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:34.104 23:00:56 -- setup/hugepages.sh@52 -- # local node_ids 00:04:34.104 23:00:56 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages 
)) 00:04:34.104 23:00:56 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:34.104 23:00:56 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:34.104 23:00:56 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:34.104 23:00:56 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:34.104 23:00:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:34.104 23:00:56 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:34.104 23:00:56 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:34.104 23:00:56 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:34.104 23:00:56 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:34.104 23:00:56 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:34.104 23:00:56 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:34.104 23:00:56 -- setup/hugepages.sh@73 -- # return 0 00:04:34.104 23:00:56 -- setup/hugepages.sh@198 -- # setup output 00:04:34.104 23:00:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.104 23:00:56 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:38.315 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:38.315 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:38.315 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:38.315 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:38.315 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:38.315 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:38.315 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:38.315 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:38.315 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:38.315 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:38.315 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:38.315 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:38.315 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:38.315 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:38.315 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:38.315 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:38.315 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:38.315 23:01:00 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:38.315 23:01:00 -- setup/hugepages.sh@89 -- # local node 00:04:38.315 23:01:00 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:38.315 23:01:00 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:38.315 23:01:00 -- setup/hugepages.sh@92 -- # local surp 00:04:38.315 23:01:00 -- setup/hugepages.sh@93 -- # local resv 00:04:38.315 23:01:00 -- setup/hugepages.sh@94 -- # local anon 00:04:38.315 23:01:00 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:38.315 23:01:00 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:38.315 23:01:00 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:38.315 23:01:00 -- setup/common.sh@18 -- # local node= 00:04:38.315 23:01:00 -- setup/common.sh@19 -- # local var val 00:04:38.315 23:01:00 -- setup/common.sh@20 -- # local mem_f mem 00:04:38.315 23:01:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.315 23:01:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.315 23:01:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.315 23:01:00 -- setup/common.sh@28 -- # mapfile -t 
mem 00:04:38.315 23:01:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.315 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.315 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 107890544 kB' 'MemAvailable: 111237560 kB' 'Buffers: 4132 kB' 'Cached: 11817468 kB' 'SwapCached: 0 kB' 'Active: 8904768 kB' 'Inactive: 3525644 kB' 'Active(anon): 8414684 kB' 'Inactive(anon): 0 kB' 'Active(file): 490084 kB' 'Inactive(file): 3525644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 612760 kB' 'Mapped: 208684 kB' 'Shmem: 7805872 kB' 'KReclaimable: 311240 kB' 'Slab: 1150008 kB' 'SReclaimable: 311240 kB' 'SUnreclaim: 838768 kB' 'KernelStack: 27440 kB' 'PageTables: 8988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9952528 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235612 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3982708 kB' 'DirectMap2M: 41834496 kB' 'DirectMap1G: 90177536 kB' 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ Inactive == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 
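Stepping back to the entries that opened this test: get_test_nr_hugepages was called as "get_test_nr_hugepages 2097152 0", i.e. 2 GiB worth of default-sized pages pinned to NUMA node 0. Assuming the size argument is in kB (which the 2048 kB default page size and the resulting nr_hugepages=1024 imply), the arithmetic behind the values visible in the trace is simply:
size_kb=2097152                                  # first argument to get_test_nr_hugepages
default_hugepages=2048                           # Hugepagesize: 2048 kB from /proc/meminfo
nr_hugepages=$(( size_kb / default_hugepages ))  # = 1024 pages, as traced above
node_ids=('0')                                   # second argument: restrict to node 0
declare -a nodes_test
for _no_nodes in "${node_ids[@]}"; do
    nodes_test[_no_nodes]=$nr_hugepages          # nodes_test[0]=1024; node 1 gets nothing
done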
00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 
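The AnonHugePages pass running through these entries is the first of several global reads that verify_nr_hugepages makes before it walks the individual nodes: it only counts anonymous huge pages while transparent huge pages are not pinned to "never" (the "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" entry earlier), then collects the surplus and reserved counts and checks them against the requested allocation. A rough outline, reconstructed from this trace rather than copied from test/setup/hugepages.sh:
verify_nr_hugepages() {                     # outline only; the real function does more
    local anon=0 surp resv
    # AnonHugePages is only relevant while THP is not set to "never".
    if [[ $(< /sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 in this run
    fi
    surp=$(get_meminfo HugePages_Surp)      # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)      # scanned a few entries further down
    # Same consistency check the earlier custom_alloc pass traced as
    # "(( 1536 == nr_hugepages + surp + resv ))":
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))
}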
00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.316 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.316 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.317 23:01:00 -- setup/common.sh@33 -- # echo 0 00:04:38.317 23:01:00 -- setup/common.sh@33 -- # return 0 00:04:38.317 23:01:00 -- setup/hugepages.sh@97 -- # anon=0 00:04:38.317 23:01:00 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:38.317 23:01:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:38.317 23:01:00 -- setup/common.sh@18 -- # local node= 00:04:38.317 23:01:00 -- setup/common.sh@19 -- # local var val 00:04:38.317 23:01:00 -- setup/common.sh@20 -- # local mem_f mem 00:04:38.317 23:01:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.317 23:01:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.317 23:01:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.317 23:01:00 -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.317 23:01:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.317 23:01:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 107890696 kB' 'MemAvailable: 111237712 kB' 'Buffers: 4132 kB' 'Cached: 11817472 kB' 'SwapCached: 0 kB' 'Active: 8904496 kB' 'Inactive: 3525644 kB' 'Active(anon): 8414412 kB' 'Inactive(anon): 0 kB' 'Active(file): 490084 kB' 'Inactive(file): 3525644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 
'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 612444 kB' 'Mapped: 208648 kB' 'Shmem: 7805876 kB' 'KReclaimable: 311240 kB' 'Slab: 1150008 kB' 'SReclaimable: 311240 kB' 'SUnreclaim: 838768 kB' 'KernelStack: 27424 kB' 'PageTables: 8916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9952540 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235596 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3982708 kB' 'DirectMap2M: 41834496 kB' 'DirectMap1G: 90177536 kB' 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
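When debugging a run like this by hand, the counters this scan extracts one key at a time can be read directly from the same files; two awk one-liners (paths exactly as used in the trace) give the global and the node-local view:
awk '/^HugePages_(Total|Free|Rsvd|Surp)/ {print $1, $2}' /proc/meminfo
awk '/HugePages_(Total|Free|Surp)/ {print $3, $4}' /sys/devices/system/node/node0/meminfo
The per-node file prefixes every line with "Node 0" (hence $3/$4) and, as the printf dumps above show, carries no HugePages_Rsvd field; the reserved count only exists globally.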
00:04:38.317 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.317 
23:01:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.317 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.317 23:01:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ Unaccepted 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.318 23:01:00 -- setup/common.sh@33 -- # echo 0 00:04:38.318 23:01:00 -- setup/common.sh@33 -- # return 0 00:04:38.318 23:01:00 -- setup/hugepages.sh@99 -- # surp=0 00:04:38.318 23:01:00 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:38.318 23:01:00 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:38.318 23:01:00 -- setup/common.sh@18 -- # local node= 00:04:38.318 23:01:00 -- setup/common.sh@19 -- # local var val 00:04:38.318 23:01:00 -- setup/common.sh@20 -- # local mem_f mem 00:04:38.318 23:01:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.318 23:01:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.318 23:01:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.318 23:01:00 -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.318 23:01:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 107891172 kB' 'MemAvailable: 111238188 kB' 'Buffers: 4132 kB' 'Cached: 11817480 kB' 'SwapCached: 0 kB' 'Active: 8904736 kB' 'Inactive: 3525644 kB' 'Active(anon): 8414652 kB' 'Inactive(anon): 0 kB' 'Active(file): 490084 kB' 'Inactive(file): 3525644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 612732 kB' 'Mapped: 208568 kB' 'Shmem: 7805884 kB' 'KReclaimable: 311240 kB' 'Slab: 1150000 kB' 'SReclaimable: 311240 kB' 'SUnreclaim: 838760 kB' 'KernelStack: 27424 kB' 'PageTables: 8896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9955820 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235564 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3982708 kB' 'DirectMap2M: 41834496 kB' 'DirectMap1G: 90177536 kB' 00:04:38.318 23:01:00 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- 
setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 
23:01:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.318 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.318 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.319 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.319 23:01:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.319 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.319 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.319 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.319 23:01:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.319 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.319 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.319 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.319 23:01:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.319 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.319 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.319 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.319 23:01:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.319 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.319 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.319 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.319 23:01:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.319 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.319 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.319 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.319 23:01:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.319 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.319 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.319 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.319 23:01:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.319 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.319 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.319 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.319 23:01:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.319 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.319 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.319 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.319 23:01:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.319 23:01:00 -- setup/common.sh@33 -- # echo 0 00:04:38.319 
23:01:00 -- setup/common.sh@33 -- # return 0 00:04:38.319 23:01:00 -- setup/hugepages.sh@100 -- # resv=0 00:04:38.319 23:01:00 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:38.319 nr_hugepages=1024 00:04:38.319 23:01:00 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:38.319 resv_hugepages=0 00:04:38.319 23:01:00 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:38.319 surplus_hugepages=0 00:04:38.319 23:01:00 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:38.319 anon_hugepages=0 00:04:38.319 23:01:00 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:38.319 23:01:00 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:38.319 23:01:00 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:38.319 23:01:00 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:38.319 23:01:00 -- setup/common.sh@18 -- # local node= 00:04:38.319 23:01:00 -- setup/common.sh@19 -- # local var val 00:04:38.319 23:01:00 -- setup/common.sh@20 -- # local mem_f mem 00:04:38.319 23:01:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.319 23:01:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.319 23:01:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.319 23:01:00 -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.319 23:01:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.319 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.319 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.319 23:01:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 107892700 kB' 'MemAvailable: 111239716 kB' 'Buffers: 4132 kB' 'Cached: 11817492 kB' 'SwapCached: 0 kB' 'Active: 8904844 kB' 'Inactive: 3525644 kB' 'Active(anon): 8414760 kB' 'Inactive(anon): 0 kB' 'Active(file): 490084 kB' 'Inactive(file): 3525644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 612688 kB' 'Mapped: 208568 kB' 'Shmem: 7805896 kB' 'KReclaimable: 311240 kB' 'Slab: 1149996 kB' 'SReclaimable: 311240 kB' 'SUnreclaim: 838756 kB' 'KernelStack: 27504 kB' 'PageTables: 8772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9956060 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235580 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3982708 kB' 'DirectMap2M: 41834496 kB' 'DirectMap1G: 90177536 kB' 00:04:38.319 23:01:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.319 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.319 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.319 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.319 23:01:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.319 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.319 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.319 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.319 23:01:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
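The entries above close out the reserved-pages scan (resv=0), echo the summary values (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0), and then re-read HugePages_Total to confirm the pool is consistent with what was requested. A minimal sketch of that consistency check, assuming a standard /proc/meminfo layout (the variable names below are illustrative, not the script's own):

    # Sketch: confirm the kernel's hugepage pool matches what was requested.
    nr_hugepages=1024   # requested size of the pool (assumed for this example)
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage pool consistent: total=$total surp=$surp resv=$resv"
    else
        echo "unexpected hugepage accounting" >&2
    fi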
00:04:38.319 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.319 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.319 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.319 23:01:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.319 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.319 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.319 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.319 23:01:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.319 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.319 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.319 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.319 23:01:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 
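Every "[[ <key> == ... ]]" / "continue" pair in this stretch of the trace is one iteration of the field-matching loop inside get_meminfo: each meminfo line is split on ': ' into a key and a value, non-matching keys are skipped, and the value is echoed once the requested key (here HugePages_Total) is reached. A stripped-down sketch of that loop, reading plain /proc/meminfo (the function name is illustrative):

    # Sketch of the key/value scan the xtrace lines document.
    get_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every other meminfo key
            echo "$val"                        # numeric value; any ' kB' unit lands in the discarded field
            return 0
        done < /proc/meminfo
        return 1
    }

    get_field HugePages_Total   # prints 1024 on a run like the one traced here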
00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 
23:01:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.320 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.320 23:01:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.320 23:01:00 -- setup/common.sh@33 -- # echo 1024 00:04:38.320 23:01:00 -- setup/common.sh@33 -- # return 0 00:04:38.320 23:01:00 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:38.320 23:01:00 -- setup/hugepages.sh@112 -- # get_nodes 00:04:38.320 23:01:00 -- setup/hugepages.sh@27 -- # local node 00:04:38.320 23:01:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:38.320 23:01:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:38.320 23:01:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:38.320 23:01:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:38.320 23:01:00 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:38.321 23:01:00 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:38.321 23:01:00 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:38.321 23:01:00 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:38.321 23:01:00 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:38.321 23:01:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:38.321 23:01:00 
-- setup/common.sh@18 -- # local node=0 00:04:38.321 23:01:00 -- setup/common.sh@19 -- # local var val 00:04:38.321 23:01:00 -- setup/common.sh@20 -- # local mem_f mem 00:04:38.321 23:01:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.321 23:01:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:38.321 23:01:00 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:38.321 23:01:00 -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.321 23:01:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52541848 kB' 'MemUsed: 13117160 kB' 'SwapCached: 0 kB' 'Active: 5533312 kB' 'Inactive: 3325404 kB' 'Active(anon): 5197856 kB' 'Inactive(anon): 0 kB' 'Active(file): 335456 kB' 'Inactive(file): 3325404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8655144 kB' 'Mapped: 131880 kB' 'AnonPages: 207196 kB' 'Shmem: 4994284 kB' 'KernelStack: 13416 kB' 'PageTables: 5472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 177796 kB' 'Slab: 630984 kB' 'SReclaimable: 177796 kB' 'SUnreclaim: 453188 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 
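This particular scan was entered with a node argument (get_meminfo HugePages_Surp 0), so the values come from /sys/devices/system/node/node0/meminfo rather than /proc/meminfo, and the leading "Node 0 " prefix on each line is stripped before the same key matching runs. A hedged sketch of that per-NUMA-node variant, assuming the standard sysfs layout (the helper name is illustrative):

    # Sketch: read one field from a specific NUMA node's meminfo.
    node_field() {
        local node=$1 get=$2 line var val _
        while IFS= read -r line; do
            line=${line#"Node $node "}          # drop the 'Node N ' prefix sysfs adds
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "/sys/devices/system/node/node${node}/meminfo"
        return 1
    }

    node_field 0 HugePages_Surp   # 0 on a run like the one traced here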
00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
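For a quick manual look at the same per-node counters, outside the test scripts, the sysfs files can simply be grepped; this is only a convenience check, not part of the traced run:

    grep -H HugePages /sys/devices/system/node/node*/meminfo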
00:04:38.321 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # continue 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.321 23:01:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.321 23:01:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.321 23:01:00 -- setup/common.sh@33 -- # echo 0 00:04:38.321 23:01:00 -- setup/common.sh@33 -- # return 0 00:04:38.321 23:01:00 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:38.321 23:01:00 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:38.321 23:01:00 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:38.321 23:01:00 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:38.321 23:01:00 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:38.321 node0=1024 expecting 1024 00:04:38.321 23:01:00 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:38.321 23:01:00 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:38.321 23:01:00 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:38.321 23:01:00 -- setup/hugepages.sh@202 -- # setup output 00:04:38.321 23:01:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.321 23:01:00 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:41.624 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:41.624 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:41.624 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:41.624 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:41.624 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:41.624 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:41.624 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:41.624 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:41.624 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:41.624 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:41.624 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:41.624 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:41.624 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:41.624 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:41.624 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:41.624 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:41.624 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:41.624 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:41.624 23:01:03 -- setup/hugepages.sh@204 -- # 
verify_nr_hugepages 00:04:41.624 23:01:03 -- setup/hugepages.sh@89 -- # local node 00:04:41.624 23:01:03 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:41.624 23:01:03 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:41.624 23:01:03 -- setup/hugepages.sh@92 -- # local surp 00:04:41.624 23:01:03 -- setup/hugepages.sh@93 -- # local resv 00:04:41.624 23:01:03 -- setup/hugepages.sh@94 -- # local anon 00:04:41.624 23:01:03 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:41.624 23:01:03 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:41.624 23:01:03 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:41.624 23:01:03 -- setup/common.sh@18 -- # local node= 00:04:41.624 23:01:03 -- setup/common.sh@19 -- # local var val 00:04:41.624 23:01:03 -- setup/common.sh@20 -- # local mem_f mem 00:04:41.624 23:01:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.624 23:01:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.624 23:01:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.624 23:01:03 -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.624 23:01:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.624 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.624 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.624 23:01:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 107895204 kB' 'MemAvailable: 111242200 kB' 'Buffers: 4132 kB' 'Cached: 11817600 kB' 'SwapCached: 0 kB' 'Active: 8906544 kB' 'Inactive: 3525644 kB' 'Active(anon): 8416460 kB' 'Inactive(anon): 0 kB' 'Active(file): 490084 kB' 'Inactive(file): 3525644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613840 kB' 'Mapped: 208728 kB' 'Shmem: 7806004 kB' 'KReclaimable: 311200 kB' 'Slab: 1149572 kB' 'SReclaimable: 311200 kB' 'SUnreclaim: 838372 kB' 'KernelStack: 27520 kB' 'PageTables: 9044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9957976 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235836 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3982708 kB' 'DirectMap2M: 41834496 kB' 'DirectMap1G: 90177536 kB' 00:04:41.624 23:01:03 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.624 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.624 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.624 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.624 23:01:03 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.624 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.624 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.624 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.624 23:01:03 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.624 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.624 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.624 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.624 23:01:03 -- 
setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.624 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.624 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.624 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.624 23:01:03 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.624 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.624 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.624 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.624 23:01:03 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.624 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.624 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.624 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.624 23:01:03 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.624 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.624 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.624 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.624 23:01:03 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.624 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.624 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.624 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.624 23:01:03 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.624 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.624 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.624 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.624 23:01:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.624 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.624 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.624 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.625 23:01:03 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.625 23:01:03 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.625 23:01:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.625 23:01:03 -- setup/common.sh@33 -- # echo 0 00:04:41.625 23:01:03 -- setup/common.sh@33 -- # return 0 00:04:41.625 23:01:03 -- setup/hugepages.sh@97 -- # anon=0 00:04:41.625 23:01:03 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:41.625 
23:01:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.625 23:01:03 -- setup/common.sh@18 -- # local node= 00:04:41.625 23:01:03 -- setup/common.sh@19 -- # local var val 00:04:41.625 23:01:03 -- setup/common.sh@20 -- # local mem_f mem 00:04:41.625 23:01:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.625 23:01:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.625 23:01:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.625 23:01:03 -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.625 23:01:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.625 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 107898484 kB' 'MemAvailable: 111245480 kB' 'Buffers: 4132 kB' 'Cached: 11817604 kB' 'SwapCached: 0 kB' 'Active: 8907524 kB' 'Inactive: 3525644 kB' 'Active(anon): 8417440 kB' 'Inactive(anon): 0 kB' 'Active(file): 490084 kB' 'Inactive(file): 3525644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 614404 kB' 'Mapped: 208696 kB' 'Shmem: 7806008 kB' 'KReclaimable: 311200 kB' 'Slab: 1149480 kB' 'SReclaimable: 311200 kB' 'SUnreclaim: 838280 kB' 'KernelStack: 27568 kB' 'PageTables: 9168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9958224 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235788 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3982708 kB' 'DirectMap2M: 41834496 kB' 'DirectMap1G: 90177536 kB' 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.626 23:01:03 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # 
continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.626 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.626 23:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.627 23:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.627 23:01:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.627 23:01:03 -- setup/common.sh@32 -- # continue 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.627 23:01:04 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.627 23:01:04 -- setup/common.sh@33 -- # echo 0 00:04:41.627 23:01:04 -- setup/common.sh@33 -- # return 0 00:04:41.627 23:01:04 -- setup/hugepages.sh@99 -- # surp=0 00:04:41.627 23:01:04 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:41.627 23:01:04 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:41.627 23:01:04 -- setup/common.sh@18 -- # local node= 00:04:41.627 23:01:04 -- setup/common.sh@19 -- # local var val 00:04:41.627 23:01:04 -- setup/common.sh@20 -- # local mem_f mem 00:04:41.627 23:01:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.627 23:01:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.627 23:01:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.627 23:01:04 -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.627 23:01:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.627 23:01:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 107901696 kB' 'MemAvailable: 111248692 kB' 'Buffers: 4132 kB' 'Cached: 11817612 kB' 'SwapCached: 0 kB' 
'Active: 8906600 kB' 'Inactive: 3525644 kB' 'Active(anon): 8416516 kB' 'Inactive(anon): 0 kB' 'Active(file): 490084 kB' 'Inactive(file): 3525644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613872 kB' 'Mapped: 208520 kB' 'Shmem: 7806016 kB' 'KReclaimable: 311200 kB' 'Slab: 1149456 kB' 'SReclaimable: 311200 kB' 'SUnreclaim: 838256 kB' 'KernelStack: 27600 kB' 'PageTables: 9136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9958240 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235772 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3982708 kB' 'DirectMap2M: 41834496 kB' 'DirectMap1G: 90177536 kB' 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.627 23:01:04 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.627 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.627 23:01:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # continue 
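A reading aid for this stretch of the log: backslash-separated tokens such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d are not corruption. They are how bash xtrace prints a quoted right-hand side of a [[ ... == ... ]] comparison, escaping every character to show the match is literal rather than a glob; the timestamp and script@line prefix on each step come from the customized PS4 this test suite sets. A tiny illustrative snippet (not from SPDK) that reproduces the effect:

#!/usr/bin/env bash
# Reproduces the escaped-pattern form seen in this log. With a quoted
# right-hand side, xtrace prints each character backslash-escaped.
set -x
get=HugePages_Rsvd
var=HugePages_Rsvd
[[ $var == "$get" ]] && echo matched
# Typical trace (default PS4, so lines start with '+'), approximately:
# + [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
# + echo matched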
00:04:41.628 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.628 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.628 23:01:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.628 23:01:04 -- setup/common.sh@33 -- # echo 0 00:04:41.628 23:01:04 -- setup/common.sh@33 -- # return 0 00:04:41.628 23:01:04 -- setup/hugepages.sh@100 -- # resv=0 00:04:41.628 23:01:04 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:41.628 nr_hugepages=1024 00:04:41.628 23:01:04 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:41.628 resv_hugepages=0 00:04:41.628 23:01:04 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:41.628 surplus_hugepages=0 00:04:41.628 23:01:04 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:41.628 anon_hugepages=0 00:04:41.628 23:01:04 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:41.628 23:01:04 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:41.628 23:01:04 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:41.628 23:01:04 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:41.628 23:01:04 -- setup/common.sh@18 -- # local node= 00:04:41.628 23:01:04 -- setup/common.sh@19 -- # local var val 00:04:41.629 23:01:04 -- setup/common.sh@20 -- # local mem_f mem 00:04:41.629 23:01:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.629 23:01:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.629 23:01:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.629 23:01:04 -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.629 23:01:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.629 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.629 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.629 23:01:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 107905196 kB' 'MemAvailable: 111252192 kB' 'Buffers: 4132 kB' 'Cached: 11817612 kB' 'SwapCached: 0 kB' 'Active: 8906904 kB' 'Inactive: 3525644 kB' 'Active(anon): 8416820 kB' 'Inactive(anon): 0 kB' 'Active(file): 490084 kB' 'Inactive(file): 3525644 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 614288 kB' 'Mapped: 208520 kB' 'Shmem: 7806016 kB' 'KReclaimable: 311200 kB' 'Slab: 1149456 kB' 'SReclaimable: 311200 kB' 'SUnreclaim: 838256 kB' 'KernelStack: 27552 kB' 'PageTables: 9128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 9956620 kB' 'VmallocTotal: 
13743895347199 kB' 'VmallocUsed: 235836 kB' 'VmallocChunk: 0 kB' 'Percpu: 111168 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3982708 kB' 'DirectMap2M: 41834496 kB' 'DirectMap1G: 90177536 kB' 00:04:41.629 23:01:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.629 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.629 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.629 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.629 23:01:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.629 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.629 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.629 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.629 23:01:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.629 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.629 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.629 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.629 23:01:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.629 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.629 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.629 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.629 23:01:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.629 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.629 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.629 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.630 23:01:04 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.630 23:01:04 -- 
setup/common.sh@32 -- # continue 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.630 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.630 23:01:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.631 23:01:04 -- 
setup/common.sh@33 -- # echo 1024 00:04:41.631 23:01:04 -- setup/common.sh@33 -- # return 0 00:04:41.631 23:01:04 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:41.631 23:01:04 -- setup/hugepages.sh@112 -- # get_nodes 00:04:41.631 23:01:04 -- setup/hugepages.sh@27 -- # local node 00:04:41.631 23:01:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.631 23:01:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:41.631 23:01:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.631 23:01:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:41.631 23:01:04 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:41.631 23:01:04 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:41.631 23:01:04 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:41.631 23:01:04 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:41.631 23:01:04 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:41.631 23:01:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.631 23:01:04 -- setup/common.sh@18 -- # local node=0 00:04:41.631 23:01:04 -- setup/common.sh@19 -- # local var val 00:04:41.631 23:01:04 -- setup/common.sh@20 -- # local mem_f mem 00:04:41.631 23:01:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.631 23:01:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:41.631 23:01:04 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:41.631 23:01:04 -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.631 23:01:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.631 23:01:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52547932 kB' 'MemUsed: 13111076 kB' 'SwapCached: 0 kB' 'Active: 5534372 kB' 'Inactive: 3325404 kB' 'Active(anon): 5198916 kB' 'Inactive(anon): 0 kB' 'Active(file): 335456 kB' 'Inactive(file): 3325404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8655248 kB' 'Mapped: 131828 kB' 'AnonPages: 207788 kB' 'Shmem: 4994388 kB' 'KernelStack: 13416 kB' 'PageTables: 5568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 177756 kB' 'Slab: 630556 kB' 'SReclaimable: 177756 kB' 'SUnreclaim: 452800 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.631 
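The surrounding trace performs two checks that are easy to lose in the noise: first, HugePages_Total read from /proc/meminfo (1024) is compared against the requested nr_hugepages plus surplus plus reserved pages (1024 + 0 + 0); second, the same keys are re-read per NUMA node from /sys/devices/system/node/node0/meminfo to see how the pool is distributed. A compact sketch of that verification, using awk instead of the script's pure-bash parser (the field choices are assumptions about the standard meminfo layout; this is not the SPDK script itself):

#!/usr/bin/env bash
# Sketch of the huge page pool consistency check traced above.
total=$(awk '/^HugePages_Total:/ {print $NF}' /proc/meminfo)
surp=$(awk  '/^HugePages_Surp:/  {print $NF}' /proc/meminfo)
resv=$(awk  '/^HugePages_Rsvd:/  {print $NF}' /proc/meminfo)
nr_hugepages=1024   # count requested by this test run

if (( total == nr_hugepages + surp + resv )); then
    echo "huge page pool consistent: total=$total surp=$surp resv=$resv"
fi

# Per-node view: same keys, read from each node's own meminfo file
# (lines there carry a "Node <n> " prefix, hence no ^ anchor).
for node in /sys/devices/system/node/node[0-9]*; do
    n=${node##*node}
    echo "node$n=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")"
done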
23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.631 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.631 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.632 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.632 23:01:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.632 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.632 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.632 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.632 23:01:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.632 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.632 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.632 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.632 23:01:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.632 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.632 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.632 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.632 23:01:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.632 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.632 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.632 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.632 23:01:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.632 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.632 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.632 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.632 23:01:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.632 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.632 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.632 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.632 23:01:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.632 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.632 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.632 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.632 23:01:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.632 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.632 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.632 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.632 23:01:04 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.632 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.632 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.632 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.632 23:01:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.632 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.632 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.632 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.632 23:01:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.632 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.632 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.632 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.632 23:01:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.632 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.632 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.632 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.632 23:01:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.632 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.632 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.632 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.632 23:01:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.632 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.632 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.632 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.632 23:01:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.632 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.632 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.632 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.632 23:01:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.632 23:01:04 -- setup/common.sh@32 -- # continue 00:04:41.632 23:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.632 23:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.632 23:01:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.632 23:01:04 -- setup/common.sh@33 -- # echo 0 00:04:41.632 23:01:04 -- setup/common.sh@33 -- # return 0 00:04:41.632 23:01:04 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:41.632 23:01:04 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:41.632 23:01:04 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:41.632 23:01:04 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:41.632 23:01:04 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:41.632 node0=1024 expecting 1024 00:04:41.632 23:01:04 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:41.632 00:04:41.632 real 0m7.373s 00:04:41.632 user 0m2.947s 00:04:41.632 sys 0m4.544s 00:04:41.632 23:01:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.632 23:01:04 -- common/autotest_common.sh@10 -- # set +x 00:04:41.632 ************************************ 00:04:41.632 END TEST no_shrink_alloc 00:04:41.632 ************************************ 00:04:41.632 23:01:04 -- setup/hugepages.sh@217 -- # clear_hp 00:04:41.632 23:01:04 -- setup/hugepages.sh@37 -- # local node hp 00:04:41.632 23:01:04 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:41.632 
23:01:04 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:41.632 23:01:04 -- setup/hugepages.sh@41 -- # echo 0 00:04:41.632 23:01:04 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:41.632 23:01:04 -- setup/hugepages.sh@41 -- # echo 0 00:04:41.632 23:01:04 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:41.632 23:01:04 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:41.632 23:01:04 -- setup/hugepages.sh@41 -- # echo 0 00:04:41.632 23:01:04 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:41.632 23:01:04 -- setup/hugepages.sh@41 -- # echo 0 00:04:41.632 23:01:04 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:41.632 23:01:04 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:41.632 00:04:41.632 real 0m25.960s 00:04:41.632 user 0m10.151s 00:04:41.632 sys 0m16.110s 00:04:41.632 23:01:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.632 23:01:04 -- common/autotest_common.sh@10 -- # set +x 00:04:41.632 ************************************ 00:04:41.632 END TEST hugepages 00:04:41.632 ************************************ 00:04:41.632 23:01:04 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:41.632 23:01:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:41.632 23:01:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:41.632 23:01:04 -- common/autotest_common.sh@10 -- # set +x 00:04:41.632 ************************************ 00:04:41.632 START TEST driver 00:04:41.632 ************************************ 00:04:41.632 23:01:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:41.632 * Looking for test storage... 
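The hugepages checks above walk /proc/meminfo field by field until they reach HugePages_Surp, then zero every per-node pool through sysfs before moving on. A minimal stand-alone sketch of that clear step (not the harness script itself; it assumes root and the same sysfs layout seen in the trace):

#!/usr/bin/env bash
set -euo pipefail

# Show the global counters the test was scanning for in /proc/meminfo.
grep -E '^HugePages_(Total|Free|Surp):' /proc/meminfo

# Reset every per-node hugepage pool, mirroring the clear_hp/CLEAR_HUGE step.
for hp in /sys/devices/system/node/node*/hugepages/hugepages-*; do
    echo 0 > "$hp/nr_hugepages"
done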
00:04:41.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:41.632 23:01:04 -- setup/driver.sh@68 -- # setup reset 00:04:41.632 23:01:04 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:41.632 23:01:04 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:46.968 23:01:09 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:46.968 23:01:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:46.968 23:01:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:46.968 23:01:09 -- common/autotest_common.sh@10 -- # set +x 00:04:46.968 ************************************ 00:04:46.968 START TEST guess_driver 00:04:46.968 ************************************ 00:04:46.968 23:01:09 -- common/autotest_common.sh@1104 -- # guess_driver 00:04:46.968 23:01:09 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:46.968 23:01:09 -- setup/driver.sh@47 -- # local fail=0 00:04:46.968 23:01:09 -- setup/driver.sh@49 -- # pick_driver 00:04:46.968 23:01:09 -- setup/driver.sh@36 -- # vfio 00:04:46.968 23:01:09 -- setup/driver.sh@21 -- # local iommu_grups 00:04:46.968 23:01:09 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:46.968 23:01:09 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:46.968 23:01:09 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:46.968 23:01:09 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:46.968 23:01:09 -- setup/driver.sh@29 -- # (( 322 > 0 )) 00:04:46.968 23:01:09 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:46.968 23:01:09 -- setup/driver.sh@14 -- # mod vfio_pci 00:04:46.968 23:01:09 -- setup/driver.sh@12 -- # dep vfio_pci 00:04:46.968 23:01:09 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:46.968 23:01:09 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:46.968 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:46.968 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:46.968 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:46.968 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:46.968 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:46.968 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:46.968 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:46.968 23:01:09 -- setup/driver.sh@30 -- # return 0 00:04:46.968 23:01:09 -- setup/driver.sh@37 -- # echo vfio-pci 00:04:46.968 23:01:09 -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:46.968 23:01:09 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:46.968 23:01:09 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:46.968 Looking for driver=vfio-pci 00:04:46.968 23:01:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:46.968 23:01:09 -- setup/driver.sh@45 -- # setup output config 00:04:46.968 23:01:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.968 23:01:09 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:50.296 23:01:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.296 23:01:12 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:04:50.296 23:01:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.296 23:01:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.296 23:01:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.296 23:01:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.296 23:01:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.296 23:01:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.296 23:01:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.296 23:01:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.296 23:01:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.296 23:01:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.296 23:01:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.296 23:01:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.296 23:01:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.296 23:01:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.296 23:01:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.296 23:01:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.296 23:01:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.296 23:01:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.296 23:01:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.296 23:01:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.296 23:01:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.296 23:01:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.296 23:01:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.296 23:01:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.296 23:01:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.296 23:01:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.296 23:01:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.296 23:01:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.296 23:01:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.296 23:01:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.296 23:01:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.296 23:01:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.296 23:01:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.296 23:01:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.296 23:01:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.296 23:01:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.296 23:01:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.296 23:01:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.296 23:01:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.296 23:01:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.296 23:01:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.296 23:01:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.296 23:01:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.296 23:01:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.296 23:01:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.296 23:01:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.296 23:01:12 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:04:50.296 23:01:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.296 23:01:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.296 23:01:12 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:50.296 23:01:12 -- setup/driver.sh@65 -- # setup reset 00:04:50.296 23:01:12 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:50.296 23:01:12 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:55.602 00:04:55.602 real 0m8.307s 00:04:55.602 user 0m2.761s 00:04:55.602 sys 0m4.812s 00:04:55.602 23:01:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.602 23:01:17 -- common/autotest_common.sh@10 -- # set +x 00:04:55.602 ************************************ 00:04:55.602 END TEST guess_driver 00:04:55.602 ************************************ 00:04:55.602 00:04:55.602 real 0m13.220s 00:04:55.602 user 0m4.199s 00:04:55.602 sys 0m7.550s 00:04:55.602 23:01:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.602 23:01:17 -- common/autotest_common.sh@10 -- # set +x 00:04:55.602 ************************************ 00:04:55.602 END TEST driver 00:04:55.602 ************************************ 00:04:55.602 23:01:17 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:55.602 23:01:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:55.602 23:01:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:55.602 23:01:17 -- common/autotest_common.sh@10 -- # set +x 00:04:55.602 ************************************ 00:04:55.602 START TEST devices 00:04:55.602 ************************************ 00:04:55.602 23:01:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:55.602 * Looking for test storage... 
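The guess_driver test above settles on vfio-pci by checking the unsafe-no-IOMMU knob, counting /sys/kernel/iommu_groups entries, and confirming that vfio_pci resolves via modprobe --show-depends. A minimal sketch of that decision, assuming the same sysfs paths; the uio_pci_generic fallback name is an assumption, not taken from this log:

#!/usr/bin/env bash
set -euo pipefail
shopt -s nullglob

driver=uio_pci_generic                       # assumed fallback, not from the trace
unsafe=N
if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
    unsafe=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
fi

iommu_groups=(/sys/kernel/iommu_groups/*)
if { (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe == [Yy] ]]; } \
   && modprobe --show-depends vfio_pci >/dev/null 2>&1; then
    driver=vfio-pci
fi
echo "Looking for driver=$driver"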
00:04:55.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:55.602 23:01:17 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:55.602 23:01:17 -- setup/devices.sh@192 -- # setup reset 00:04:55.602 23:01:17 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:55.602 23:01:17 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:58.904 23:01:21 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:58.904 23:01:21 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:58.904 23:01:21 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:58.904 23:01:21 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:58.904 23:01:21 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:58.904 23:01:21 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:58.904 23:01:21 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:58.904 23:01:21 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:58.904 23:01:21 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:58.904 23:01:21 -- setup/devices.sh@196 -- # blocks=() 00:04:58.904 23:01:21 -- setup/devices.sh@196 -- # declare -a blocks 00:04:58.904 23:01:21 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:58.904 23:01:21 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:58.904 23:01:21 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:58.904 23:01:21 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:58.904 23:01:21 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:58.904 23:01:21 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:58.904 23:01:21 -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:58.904 23:01:21 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:58.904 23:01:21 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:58.904 23:01:21 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:58.904 23:01:21 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:58.904 No valid GPT data, bailing 00:04:58.904 23:01:21 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:58.904 23:01:21 -- scripts/common.sh@393 -- # pt= 00:04:58.904 23:01:21 -- scripts/common.sh@394 -- # return 1 00:04:58.904 23:01:21 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:58.904 23:01:21 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:58.904 23:01:21 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:58.904 23:01:21 -- setup/common.sh@80 -- # echo 1920383410176 00:04:58.904 23:01:21 -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:58.904 23:01:21 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:58.904 23:01:21 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:58.904 23:01:21 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:58.904 23:01:21 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:58.904 23:01:21 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:58.904 23:01:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:58.904 23:01:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:58.904 23:01:21 -- common/autotest_common.sh@10 -- # set +x 00:04:58.904 ************************************ 00:04:58.904 START TEST nvme_mount 00:04:58.904 ************************************ 00:04:58.904 23:01:21 -- 
common/autotest_common.sh@1104 -- # nvme_mount 00:04:58.904 23:01:21 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:58.904 23:01:21 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:58.904 23:01:21 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:58.904 23:01:21 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:58.904 23:01:21 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:58.904 23:01:21 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:58.904 23:01:21 -- setup/common.sh@40 -- # local part_no=1 00:04:58.904 23:01:21 -- setup/common.sh@41 -- # local size=1073741824 00:04:58.904 23:01:21 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:58.904 23:01:21 -- setup/common.sh@44 -- # parts=() 00:04:58.904 23:01:21 -- setup/common.sh@44 -- # local parts 00:04:58.904 23:01:21 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:58.904 23:01:21 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:58.904 23:01:21 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:58.904 23:01:21 -- setup/common.sh@46 -- # (( part++ )) 00:04:58.904 23:01:21 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:58.904 23:01:21 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:58.904 23:01:21 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:58.904 23:01:21 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:59.847 Creating new GPT entries in memory. 00:04:59.847 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:59.847 other utilities. 00:04:59.847 23:01:22 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:59.847 23:01:22 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:59.847 23:01:22 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:59.847 23:01:22 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:59.847 23:01:22 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:00.856 Creating new GPT entries in memory. 00:05:00.856 The operation has completed successfully. 
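At this point the nvme_mount test has wiped the disk's partition table and created a single 1 GiB test partition (sectors 2048 through 2099199) under flock. A destructive, stand-alone sketch of the same steps, assuming a hypothetical scratch disk in $disk; partprobe stands in for the udev/uevent wait the harness performs:

#!/usr/bin/env bash
set -euo pipefail

disk=/dev/nvme0n1                 # hypothetical scratch device; data will be lost
sgdisk "$disk" --zap-all          # destroy any existing GPT/MBR structures
# Hold the device lock while writing partition 1 (1 GiB starting at sector 2048).
flock "$disk" sgdisk "$disk" --new=1:2048:2099199
partprobe "$disk"                 # ask the kernel to pick up the new table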
00:05:00.856 23:01:23 -- setup/common.sh@57 -- # (( part++ )) 00:05:00.856 23:01:23 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:00.856 23:01:23 -- setup/common.sh@62 -- # wait 2596088 00:05:00.856 23:01:23 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:00.856 23:01:23 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:00.856 23:01:23 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:00.856 23:01:23 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:00.856 23:01:23 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:00.856 23:01:23 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:00.856 23:01:23 -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:00.856 23:01:23 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:00.856 23:01:23 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:00.856 23:01:23 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:00.856 23:01:23 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:00.856 23:01:23 -- setup/devices.sh@53 -- # local found=0 00:05:00.856 23:01:23 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:00.856 23:01:23 -- setup/devices.sh@56 -- # : 00:05:00.856 23:01:23 -- setup/devices.sh@59 -- # local pci status 00:05:00.856 23:01:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.856 23:01:23 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:00.856 23:01:23 -- setup/devices.sh@47 -- # setup output config 00:05:00.856 23:01:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.856 23:01:23 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:04.159 23:01:26 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:04.159 23:01:26 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:04.159 23:01:26 -- setup/devices.sh@63 -- # found=1 00:05:04.159 23:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.159 23:01:26 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:04.159 23:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.159 23:01:26 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:04.159 23:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.159 23:01:26 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:04.159 23:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.159 23:01:26 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:04.159 23:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.159 23:01:26 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:04.159 
23:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.159 23:01:26 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:04.159 23:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.159 23:01:26 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:04.159 23:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.159 23:01:26 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:04.159 23:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.159 23:01:26 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:04.159 23:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.159 23:01:26 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:04.159 23:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.159 23:01:26 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:04.159 23:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.159 23:01:26 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:04.159 23:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.159 23:01:26 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:04.159 23:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.159 23:01:26 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:04.159 23:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.159 23:01:26 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:04.159 23:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.159 23:01:26 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:04.159 23:01:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.159 23:01:26 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:04.159 23:01:26 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:04.159 23:01:26 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:04.159 23:01:26 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:04.159 23:01:26 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:04.159 23:01:26 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:04.159 23:01:26 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:04.159 23:01:26 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:04.159 23:01:26 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:04.159 23:01:26 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:04.159 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:04.159 23:01:26 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:04.159 23:01:26 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:04.421 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:04.422 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:05:04.422 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:04.422 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:04.422 23:01:27 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:04.422 23:01:27 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:04.422 23:01:27 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:04.422 23:01:27 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:04.422 23:01:27 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:04.422 23:01:27 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:04.683 23:01:27 -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:04.683 23:01:27 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:04.683 23:01:27 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:04.683 23:01:27 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:04.683 23:01:27 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:04.683 23:01:27 -- setup/devices.sh@53 -- # local found=0 00:05:04.683 23:01:27 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:04.683 23:01:27 -- setup/devices.sh@56 -- # : 00:05:04.683 23:01:27 -- setup/devices.sh@59 -- # local pci status 00:05:04.683 23:01:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.683 23:01:27 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:04.683 23:01:27 -- setup/devices.sh@47 -- # setup output config 00:05:04.683 23:01:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:04.683 23:01:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:07.988 23:01:30 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.988 23:01:30 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:07.988 23:01:30 -- setup/devices.sh@63 -- # found=1 00:05:07.988 23:01:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.988 23:01:30 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.988 23:01:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.988 23:01:30 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.988 23:01:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.988 23:01:30 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.988 23:01:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.988 23:01:30 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.988 23:01:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.988 23:01:30 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.988 23:01:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.988 23:01:30 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.988 23:01:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.988 23:01:30 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.988 23:01:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.988 23:01:30 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.988 23:01:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.988 23:01:30 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.988 23:01:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.988 23:01:30 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.988 23:01:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.988 23:01:30 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.988 23:01:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.988 23:01:30 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.988 23:01:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.988 23:01:30 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.988 23:01:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.988 23:01:30 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.988 23:01:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.988 23:01:30 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.988 23:01:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.988 23:01:30 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.988 23:01:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.988 23:01:30 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:07.988 23:01:30 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:07.988 23:01:30 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:07.988 23:01:30 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:07.988 23:01:30 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:07.988 23:01:30 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:07.989 23:01:30 -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:05:07.989 23:01:30 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:07.989 23:01:30 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:07.989 23:01:30 -- setup/devices.sh@50 -- # local mount_point= 00:05:07.989 23:01:30 -- setup/devices.sh@51 -- # local test_file= 00:05:07.989 23:01:30 -- setup/devices.sh@53 -- # local found=0 00:05:07.989 23:01:30 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:07.989 23:01:30 -- setup/devices.sh@59 -- # local pci status 00:05:07.989 23:01:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.989 23:01:30 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:07.989 23:01:30 -- setup/devices.sh@47 -- # setup output config 00:05:07.989 23:01:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:07.989 23:01:30 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:12.199 23:01:33 -- 
setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.199 23:01:33 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:12.199 23:01:33 -- setup/devices.sh@63 -- # found=1 00:05:12.199 23:01:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.199 23:01:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.199 23:01:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.199 23:01:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.199 23:01:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.199 23:01:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.199 23:01:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.199 23:01:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.199 23:01:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.199 23:01:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.199 23:01:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.199 23:01:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.199 23:01:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.199 23:01:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.199 23:01:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.199 23:01:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.199 23:01:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.199 23:01:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.199 23:01:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.199 23:01:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.199 23:01:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.199 23:01:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.199 23:01:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.199 23:01:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.199 23:01:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.199 23:01:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.199 23:01:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.199 23:01:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.199 23:01:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.199 23:01:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.199 23:01:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.199 23:01:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.199 23:01:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.199 23:01:34 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:12.199 23:01:34 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:12.199 23:01:34 -- setup/devices.sh@68 -- # return 0 00:05:12.199 23:01:34 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:12.199 23:01:34 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.199 23:01:34 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:05:12.199 23:01:34 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:12.199 23:01:34 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:12.199 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:12.199 00:05:12.199 real 0m12.977s 00:05:12.199 user 0m3.937s 00:05:12.199 sys 0m6.943s 00:05:12.199 23:01:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.199 23:01:34 -- common/autotest_common.sh@10 -- # set +x 00:05:12.199 ************************************ 00:05:12.199 END TEST nvme_mount 00:05:12.199 ************************************ 00:05:12.199 23:01:34 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:12.199 23:01:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:12.199 23:01:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:12.199 23:01:34 -- common/autotest_common.sh@10 -- # set +x 00:05:12.199 ************************************ 00:05:12.200 START TEST dm_mount 00:05:12.200 ************************************ 00:05:12.200 23:01:34 -- common/autotest_common.sh@1104 -- # dm_mount 00:05:12.200 23:01:34 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:12.200 23:01:34 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:12.200 23:01:34 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:12.200 23:01:34 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:12.200 23:01:34 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:12.200 23:01:34 -- setup/common.sh@40 -- # local part_no=2 00:05:12.200 23:01:34 -- setup/common.sh@41 -- # local size=1073741824 00:05:12.200 23:01:34 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:12.200 23:01:34 -- setup/common.sh@44 -- # parts=() 00:05:12.200 23:01:34 -- setup/common.sh@44 -- # local parts 00:05:12.200 23:01:34 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:12.200 23:01:34 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:12.200 23:01:34 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:12.200 23:01:34 -- setup/common.sh@46 -- # (( part++ )) 00:05:12.200 23:01:34 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:12.200 23:01:34 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:12.200 23:01:34 -- setup/common.sh@46 -- # (( part++ )) 00:05:12.200 23:01:34 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:12.200 23:01:34 -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:12.200 23:01:34 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:12.200 23:01:34 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:12.773 Creating new GPT entries in memory. 00:05:12.773 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:12.773 other utilities. 00:05:12.773 23:01:35 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:12.773 23:01:35 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:12.773 23:01:35 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:12.773 23:01:35 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:12.773 23:01:35 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:13.719 Creating new GPT entries in memory. 00:05:13.719 The operation has completed successfully. 
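The dm_mount test now has two 1 GiB partitions and goes on to assemble a device-mapper device (nvme_dm_test) over them; both partitions later show the dm node as a holder. The exact dm table is not visible in this trace, so the sketch below is only one plausible construction, a linear concatenation of the two partitions:

#!/usr/bin/env bash
set -euo pipefail

p1=/dev/nvme0n1p1                 # hypothetical 1 GiB partition
p2=/dev/nvme0n1p2                 # hypothetical 1 GiB partition
s1=$(blockdev --getsz "$p1")      # length of p1 in 512-byte sectors
s2=$(blockdev --getsz "$p2")

# Table lines: <logical_start> <length> linear <backing_device> <offset>
dmsetup create nvme_dm_test <<EOF
0 $s1 linear $p1 0
$s1 $s2 linear $p2 0
EOF
ls -l /dev/mapper/nvme_dm_test    # should resolve to a dm-N node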
00:05:13.719 23:01:36 -- setup/common.sh@57 -- # (( part++ )) 00:05:13.719 23:01:36 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:13.719 23:01:36 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:13.719 23:01:36 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:13.719 23:01:36 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:14.662 The operation has completed successfully. 00:05:14.662 23:01:37 -- setup/common.sh@57 -- # (( part++ )) 00:05:14.662 23:01:37 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:14.662 23:01:37 -- setup/common.sh@62 -- # wait 2601345 00:05:14.662 23:01:37 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:14.662 23:01:37 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:14.662 23:01:37 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:14.662 23:01:37 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:14.923 23:01:37 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:14.923 23:01:37 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:14.923 23:01:37 -- setup/devices.sh@161 -- # break 00:05:14.923 23:01:37 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:14.923 23:01:37 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:14.923 23:01:37 -- setup/devices.sh@165 -- # dm=/dev/dm-1 00:05:14.923 23:01:37 -- setup/devices.sh@166 -- # dm=dm-1 00:05:14.923 23:01:37 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-1 ]] 00:05:14.923 23:01:37 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-1 ]] 00:05:14.923 23:01:37 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:14.923 23:01:37 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:14.923 23:01:37 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:14.923 23:01:37 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:14.923 23:01:37 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:14.923 23:01:37 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:14.923 23:01:37 -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:14.923 23:01:37 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:14.923 23:01:37 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:14.923 23:01:37 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:14.923 23:01:37 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:14.923 23:01:37 -- setup/devices.sh@53 -- # local found=0 00:05:14.923 23:01:37 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:14.923 23:01:37 -- setup/devices.sh@56 -- # : 00:05:14.923 23:01:37 -- 
setup/devices.sh@59 -- # local pci status 00:05:14.923 23:01:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.923 23:01:37 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:14.923 23:01:37 -- setup/devices.sh@47 -- # setup output config 00:05:14.923 23:01:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.923 23:01:37 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:18.225 23:01:40 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:18.225 23:01:40 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:18.225 23:01:40 -- setup/devices.sh@63 -- # found=1 00:05:18.225 23:01:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.225 23:01:40 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:18.225 23:01:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.225 23:01:40 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:18.225 23:01:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.225 23:01:40 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:18.225 23:01:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.225 23:01:40 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:18.225 23:01:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.225 23:01:40 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:18.225 23:01:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.225 23:01:40 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:18.225 23:01:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.225 23:01:40 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:18.225 23:01:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.225 23:01:40 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:18.225 23:01:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.225 23:01:40 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:18.225 23:01:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.226 23:01:40 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:18.226 23:01:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.226 23:01:40 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:18.226 23:01:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.226 23:01:40 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:18.226 23:01:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.226 23:01:40 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:18.226 23:01:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.226 23:01:40 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:18.226 23:01:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.226 23:01:40 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:18.226 23:01:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.226 23:01:40 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:18.226 23:01:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.226 23:01:40 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:18.226 23:01:40 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:18.226 23:01:40 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:18.226 23:01:40 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:18.226 23:01:40 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:18.226 23:01:40 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:18.486 23:01:40 -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 '' '' 00:05:18.486 23:01:40 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:18.486 23:01:40 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 00:05:18.486 23:01:40 -- setup/devices.sh@50 -- # local mount_point= 00:05:18.486 23:01:40 -- setup/devices.sh@51 -- # local test_file= 00:05:18.486 23:01:40 -- setup/devices.sh@53 -- # local found=0 00:05:18.486 23:01:40 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:18.486 23:01:40 -- setup/devices.sh@59 -- # local pci status 00:05:18.486 23:01:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.486 23:01:40 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:18.486 23:01:40 -- setup/devices.sh@47 -- # setup output config 00:05:18.486 23:01:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:18.486 23:01:40 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:21.787 23:01:44 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:21.787 23:01:44 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\1\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\1* ]] 00:05:21.787 23:01:44 -- setup/devices.sh@63 -- # found=1 00:05:21.787 23:01:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.787 23:01:44 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:21.787 23:01:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.787 23:01:44 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:21.787 23:01:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.787 23:01:44 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:21.787 23:01:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.787 23:01:44 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:21.787 23:01:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.787 23:01:44 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:21.787 23:01:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.787 23:01:44 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:21.787 23:01:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.787 23:01:44 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:21.787 23:01:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 
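The verify step above decides not to rebind the NVMe device because its partitions are "held" by the dm target: readlink resolves /dev/mapper/nvme_dm_test to a dm-N node, and that node appears under each partition's sysfs holders directory. A small sketch of the same resolution, using the names from this test:

#!/usr/bin/env bash
set -euo pipefail

part=nvme0n1p1                                                  # partition to check
dm_node=$(basename "$(readlink -f /dev/mapper/nvme_dm_test)")   # e.g. dm-1

if [[ -e /sys/class/block/$part/holders/$dm_node ]]; then
    echo "$part is held by $dm_node; leave its PCI device bound"
else
    echo "$part has no dm holder"
fi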
00:05:21.787 23:01:44 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:21.787 23:01:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.787 23:01:44 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:21.787 23:01:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.787 23:01:44 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:21.787 23:01:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.787 23:01:44 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:21.787 23:01:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.787 23:01:44 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:21.787 23:01:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.787 23:01:44 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:21.787 23:01:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.787 23:01:44 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:21.787 23:01:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.787 23:01:44 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:21.787 23:01:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.787 23:01:44 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:21.787 23:01:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.787 23:01:44 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:21.787 23:01:44 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:21.787 23:01:44 -- setup/devices.sh@68 -- # return 0 00:05:21.787 23:01:44 -- setup/devices.sh@187 -- # cleanup_dm 00:05:21.787 23:01:44 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:21.787 23:01:44 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:21.787 23:01:44 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:21.787 23:01:44 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:21.787 23:01:44 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:21.787 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:21.787 23:01:44 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:21.787 23:01:44 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:21.787 00:05:21.787 real 0m10.180s 00:05:21.787 user 0m2.624s 00:05:21.787 sys 0m4.618s 00:05:21.787 23:01:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.787 23:01:44 -- common/autotest_common.sh@10 -- # set +x 00:05:21.787 ************************************ 00:05:21.787 END TEST dm_mount 00:05:21.787 ************************************ 00:05:22.047 23:01:44 -- setup/devices.sh@1 -- # cleanup 00:05:22.047 23:01:44 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:22.047 23:01:44 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:22.047 23:01:44 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:22.047 23:01:44 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:22.047 23:01:44 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:22.047 23:01:44 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:22.308 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:22.308 /dev/nvme0n1: 8 bytes were erased at offset 
0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:05:22.308 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:22.308 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:22.308 23:01:44 -- setup/devices.sh@12 -- # cleanup_dm 00:05:22.308 23:01:44 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:22.308 23:01:44 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:22.308 23:01:44 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:22.308 23:01:44 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:22.308 23:01:44 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:22.308 23:01:44 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:22.308 00:05:22.308 real 0m27.295s 00:05:22.308 user 0m7.885s 00:05:22.308 sys 0m14.187s 00:05:22.308 23:01:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.308 23:01:44 -- common/autotest_common.sh@10 -- # set +x 00:05:22.308 ************************************ 00:05:22.308 END TEST devices 00:05:22.308 ************************************ 00:05:22.308 00:05:22.308 real 1m31.199s 00:05:22.308 user 0m30.172s 00:05:22.308 sys 0m52.415s 00:05:22.308 23:01:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.308 23:01:44 -- common/autotest_common.sh@10 -- # set +x 00:05:22.308 ************************************ 00:05:22.308 END TEST setup.sh 00:05:22.308 ************************************ 00:05:22.308 23:01:44 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:25.608 Hugepages 00:05:25.608 node hugesize free / total 00:05:25.608 node0 1048576kB 0 / 0 00:05:25.608 node0 2048kB 2048 / 2048 00:05:25.608 node1 1048576kB 0 / 0 00:05:25.608 node1 2048kB 0 / 0 00:05:25.608 00:05:25.608 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:25.608 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:05:25.608 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:05:25.608 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:05:25.608 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:05:25.608 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:05:25.608 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:05:25.608 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:05:25.608 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:05:25.608 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:05:25.608 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:05:25.608 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:05:25.608 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:05:25.608 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:05:25.608 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:05:25.608 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:05:25.608 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:05:25.608 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:05:25.608 23:01:48 -- spdk/autotest.sh@141 -- # uname -s 00:05:25.608 23:01:48 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:05:25.608 23:01:48 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:05:25.608 23:01:48 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:29.819 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:29.819 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:29.819 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:29.819 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:29.819 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:29.819 0000:80:01.3 (8086 0b00): 
ioatdma -> vfio-pci 00:05:29.819 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:29.819 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:29.819 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:29.819 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:29.819 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:29.819 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:29.819 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:29.819 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:29.819 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:29.819 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:31.202 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:31.202 23:01:53 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:32.143 23:01:54 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:32.143 23:01:54 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:32.143 23:01:54 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:32.143 23:01:54 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:32.143 23:01:54 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:32.143 23:01:54 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:32.143 23:01:54 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:32.143 23:01:54 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:32.143 23:01:54 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:32.143 23:01:54 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:32.143 23:01:54 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:05:32.143 23:01:54 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:35.535 Waiting for block devices as requested 00:05:35.535 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:35.535 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:35.535 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:35.535 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:35.535 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:35.535 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:35.795 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:35.795 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:35.795 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:05:36.056 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:36.056 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:36.057 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:36.057 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:36.318 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:36.318 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:36.318 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:36.579 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:36.579 23:01:59 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:36.579 23:01:59 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:36.579 23:01:59 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:36.579 23:01:59 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:05:36.579 23:01:59 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:36.579 23:01:59 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:36.579 23:01:59 -- 
common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:36.579 23:01:59 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:36.579 23:01:59 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:36.579 23:01:59 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:36.579 23:01:59 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:36.579 23:01:59 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:36.579 23:01:59 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:36.579 23:01:59 -- common/autotest_common.sh@1530 -- # oacs=' 0x5f' 00:05:36.579 23:01:59 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:36.579 23:01:59 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:36.579 23:01:59 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:36.579 23:01:59 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:36.579 23:01:59 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:36.579 23:01:59 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:36.579 23:01:59 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:36.579 23:01:59 -- common/autotest_common.sh@1542 -- # continue 00:05:36.579 23:01:59 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:36.579 23:01:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:36.579 23:01:59 -- common/autotest_common.sh@10 -- # set +x 00:05:36.579 23:01:59 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:36.579 23:01:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:36.579 23:01:59 -- common/autotest_common.sh@10 -- # set +x 00:05:36.579 23:01:59 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:39.885 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:39.885 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:39.885 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:39.885 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:39.885 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:39.885 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:39.885 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:40.294 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:40.294 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:40.294 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:40.294 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:40.294 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:40.294 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:40.294 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:40.294 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:40.294 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:40.294 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:40.294 23:02:02 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:40.294 23:02:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:40.294 23:02:02 -- common/autotest_common.sh@10 -- # set +x 00:05:40.294 23:02:02 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:40.294 23:02:02 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:40.294 23:02:02 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:40.294 23:02:02 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:40.294 23:02:02 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:40.294 23:02:02 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:40.294 23:02:02 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:40.294 
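Note: the pre_cleanup pass above only needs two identify-controller fields to decide whether a namespace revert is required: OACS (bit 3, mask 0x8, namespace management support) and UNVMCAP (unallocated capacity). A rough stand-alone replay of that check, assuming nvme-cli and the same controller at 0000:65:00.0:

  bdf=0000:65:00.0
  ctrl=$(basename "$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")")   # -> nvme0
  oacs=$(nvme id-ctrl /dev/$ctrl | grep oacs | cut -d: -f2)                          # ' 0x5f' here
  if (( oacs & 0x8 )); then                               # namespace management supported
      unvmcap=$(nvme id-ctrl /dev/$ctrl | grep unvmcap | cut -d: -f2)
      (( unvmcap == 0 )) && echo "full capacity already allocated, nothing to revert"
  fi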
23:02:02 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:40.294 23:02:02 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:40.294 23:02:02 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:40.294 23:02:02 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:40.294 23:02:02 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:40.294 23:02:02 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:05:40.294 23:02:02 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:40.294 23:02:02 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:40.294 23:02:02 -- common/autotest_common.sh@1565 -- # device=0xa80a 00:05:40.294 23:02:02 -- common/autotest_common.sh@1566 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:40.294 23:02:02 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:05:40.294 23:02:02 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:40.294 23:02:02 -- common/autotest_common.sh@1578 -- # return 0 00:05:40.294 23:02:02 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:40.294 23:02:02 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:40.294 23:02:02 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:40.294 23:02:02 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:40.294 23:02:02 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:40.294 23:02:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:40.294 23:02:02 -- common/autotest_common.sh@10 -- # set +x 00:05:40.294 23:02:02 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:40.294 23:02:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:40.294 23:02:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.294 23:02:02 -- common/autotest_common.sh@10 -- # set +x 00:05:40.294 ************************************ 00:05:40.294 START TEST env 00:05:40.294 ************************************ 00:05:40.294 23:02:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:40.559 * Looking for test storage... 
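Note: opal_revert_cleanup above ends up doing nothing on this rig. get_nvme_bdfs_by_id only keeps controllers whose PCI device ID matches 0x0a54, and the controller at 0000:65:00.0 reports 0xa80a, so the revert list stays empty and autotest moves straight on to the env suite. The same decision by hand (sysfs only; the single-BDF loop mirrors what the helper iterates over):

  for bdf in 0000:65:00.0; do
      dev=$(cat /sys/bus/pci/devices/$bdf/device)      # '0xa80a' here
      [[ $dev == 0x0a54 ]] && echo "$bdf"              # only 0x0a54 parts would get the Opal revert
  done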
00:05:40.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:40.559 23:02:03 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:40.559 23:02:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:40.559 23:02:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.559 23:02:03 -- common/autotest_common.sh@10 -- # set +x 00:05:40.559 ************************************ 00:05:40.559 START TEST env_memory 00:05:40.559 ************************************ 00:05:40.559 23:02:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:40.559 00:05:40.559 00:05:40.559 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.559 http://cunit.sourceforge.net/ 00:05:40.559 00:05:40.559 00:05:40.559 Suite: memory 00:05:40.559 Test: alloc and free memory map ...[2024-06-07 23:02:03.106889] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:40.559 passed 00:05:40.559 Test: mem map translation ...[2024-06-07 23:02:03.132633] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:40.559 [2024-06-07 23:02:03.132666] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:40.559 [2024-06-07 23:02:03.132714] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:40.559 [2024-06-07 23:02:03.132722] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:40.559 passed 00:05:40.559 Test: mem map registration ...[2024-06-07 23:02:03.188051] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:40.559 [2024-06-07 23:02:03.188075] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:40.559 passed 00:05:40.820 Test: mem map adjacent registrations ...passed 00:05:40.820 00:05:40.820 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.820 suites 1 1 n/a 0 0 00:05:40.820 tests 4 4 4 0 0 00:05:40.820 asserts 152 152 152 0 n/a 00:05:40.820 00:05:40.820 Elapsed time = 0.200 seconds 00:05:40.820 00:05:40.820 real 0m0.215s 00:05:40.820 user 0m0.208s 00:05:40.820 sys 0m0.006s 00:05:40.820 23:02:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.820 23:02:03 -- common/autotest_common.sh@10 -- # set +x 00:05:40.820 ************************************ 00:05:40.820 END TEST env_memory 00:05:40.820 ************************************ 00:05:40.820 23:02:03 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:40.820 23:02:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:40.820 23:02:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.820 23:02:03 -- common/autotest_common.sh@10 -- # set +x 
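Note: the *ERROR* lines inside env_memory are expected output, not failures. The mem map translation and mem map registration tests feed deliberately invalid arguments (unaligned addresses and lengths, a vaddr above the user-mode range) into the spdk_mem_map calls and only verify that they are rejected, which is why all four tests in the suite still pass. When iterating on lib/env_dpdk/memory.c, the unit test can also be run on its own with the same binary the harness invoked above:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./test/env/memory/memory_ut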
00:05:40.820 ************************************ 00:05:40.820 START TEST env_vtophys 00:05:40.820 ************************************ 00:05:40.820 23:02:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:40.820 EAL: lib.eal log level changed from notice to debug 00:05:40.820 EAL: Detected lcore 0 as core 0 on socket 0 00:05:40.820 EAL: Detected lcore 1 as core 1 on socket 0 00:05:40.820 EAL: Detected lcore 2 as core 2 on socket 0 00:05:40.820 EAL: Detected lcore 3 as core 3 on socket 0 00:05:40.820 EAL: Detected lcore 4 as core 4 on socket 0 00:05:40.820 EAL: Detected lcore 5 as core 5 on socket 0 00:05:40.820 EAL: Detected lcore 6 as core 6 on socket 0 00:05:40.820 EAL: Detected lcore 7 as core 7 on socket 0 00:05:40.820 EAL: Detected lcore 8 as core 8 on socket 0 00:05:40.820 EAL: Detected lcore 9 as core 9 on socket 0 00:05:40.820 EAL: Detected lcore 10 as core 10 on socket 0 00:05:40.820 EAL: Detected lcore 11 as core 11 on socket 0 00:05:40.820 EAL: Detected lcore 12 as core 12 on socket 0 00:05:40.820 EAL: Detected lcore 13 as core 13 on socket 0 00:05:40.820 EAL: Detected lcore 14 as core 14 on socket 0 00:05:40.820 EAL: Detected lcore 15 as core 15 on socket 0 00:05:40.820 EAL: Detected lcore 16 as core 16 on socket 0 00:05:40.820 EAL: Detected lcore 17 as core 17 on socket 0 00:05:40.820 EAL: Detected lcore 18 as core 18 on socket 0 00:05:40.820 EAL: Detected lcore 19 as core 19 on socket 0 00:05:40.820 EAL: Detected lcore 20 as core 20 on socket 0 00:05:40.820 EAL: Detected lcore 21 as core 21 on socket 0 00:05:40.820 EAL: Detected lcore 22 as core 22 on socket 0 00:05:40.820 EAL: Detected lcore 23 as core 23 on socket 0 00:05:40.820 EAL: Detected lcore 24 as core 24 on socket 0 00:05:40.820 EAL: Detected lcore 25 as core 25 on socket 0 00:05:40.820 EAL: Detected lcore 26 as core 26 on socket 0 00:05:40.820 EAL: Detected lcore 27 as core 27 on socket 0 00:05:40.820 EAL: Detected lcore 28 as core 28 on socket 0 00:05:40.820 EAL: Detected lcore 29 as core 29 on socket 0 00:05:40.820 EAL: Detected lcore 30 as core 30 on socket 0 00:05:40.820 EAL: Detected lcore 31 as core 31 on socket 0 00:05:40.820 EAL: Detected lcore 32 as core 32 on socket 0 00:05:40.820 EAL: Detected lcore 33 as core 33 on socket 0 00:05:40.821 EAL: Detected lcore 34 as core 34 on socket 0 00:05:40.821 EAL: Detected lcore 35 as core 35 on socket 0 00:05:40.821 EAL: Detected lcore 36 as core 0 on socket 1 00:05:40.821 EAL: Detected lcore 37 as core 1 on socket 1 00:05:40.821 EAL: Detected lcore 38 as core 2 on socket 1 00:05:40.821 EAL: Detected lcore 39 as core 3 on socket 1 00:05:40.821 EAL: Detected lcore 40 as core 4 on socket 1 00:05:40.821 EAL: Detected lcore 41 as core 5 on socket 1 00:05:40.821 EAL: Detected lcore 42 as core 6 on socket 1 00:05:40.821 EAL: Detected lcore 43 as core 7 on socket 1 00:05:40.821 EAL: Detected lcore 44 as core 8 on socket 1 00:05:40.821 EAL: Detected lcore 45 as core 9 on socket 1 00:05:40.821 EAL: Detected lcore 46 as core 10 on socket 1 00:05:40.821 EAL: Detected lcore 47 as core 11 on socket 1 00:05:40.821 EAL: Detected lcore 48 as core 12 on socket 1 00:05:40.821 EAL: Detected lcore 49 as core 13 on socket 1 00:05:40.821 EAL: Detected lcore 50 as core 14 on socket 1 00:05:40.821 EAL: Detected lcore 51 as core 15 on socket 1 00:05:40.821 EAL: Detected lcore 52 as core 16 on socket 1 00:05:40.821 EAL: Detected lcore 53 as core 17 on socket 1 00:05:40.821 EAL: Detected lcore 54 as core 18 on socket 1 
00:05:40.821 EAL: Detected lcore 55 as core 19 on socket 1 00:05:40.821 EAL: Detected lcore 56 as core 20 on socket 1 00:05:40.821 EAL: Detected lcore 57 as core 21 on socket 1 00:05:40.821 EAL: Detected lcore 58 as core 22 on socket 1 00:05:40.821 EAL: Detected lcore 59 as core 23 on socket 1 00:05:40.821 EAL: Detected lcore 60 as core 24 on socket 1 00:05:40.821 EAL: Detected lcore 61 as core 25 on socket 1 00:05:40.821 EAL: Detected lcore 62 as core 26 on socket 1 00:05:40.821 EAL: Detected lcore 63 as core 27 on socket 1 00:05:40.821 EAL: Detected lcore 64 as core 28 on socket 1 00:05:40.821 EAL: Detected lcore 65 as core 29 on socket 1 00:05:40.821 EAL: Detected lcore 66 as core 30 on socket 1 00:05:40.821 EAL: Detected lcore 67 as core 31 on socket 1 00:05:40.821 EAL: Detected lcore 68 as core 32 on socket 1 00:05:40.821 EAL: Detected lcore 69 as core 33 on socket 1 00:05:40.821 EAL: Detected lcore 70 as core 34 on socket 1 00:05:40.821 EAL: Detected lcore 71 as core 35 on socket 1 00:05:40.821 EAL: Detected lcore 72 as core 0 on socket 0 00:05:40.821 EAL: Detected lcore 73 as core 1 on socket 0 00:05:40.821 EAL: Detected lcore 74 as core 2 on socket 0 00:05:40.821 EAL: Detected lcore 75 as core 3 on socket 0 00:05:40.821 EAL: Detected lcore 76 as core 4 on socket 0 00:05:40.821 EAL: Detected lcore 77 as core 5 on socket 0 00:05:40.821 EAL: Detected lcore 78 as core 6 on socket 0 00:05:40.821 EAL: Detected lcore 79 as core 7 on socket 0 00:05:40.821 EAL: Detected lcore 80 as core 8 on socket 0 00:05:40.821 EAL: Detected lcore 81 as core 9 on socket 0 00:05:40.821 EAL: Detected lcore 82 as core 10 on socket 0 00:05:40.821 EAL: Detected lcore 83 as core 11 on socket 0 00:05:40.821 EAL: Detected lcore 84 as core 12 on socket 0 00:05:40.821 EAL: Detected lcore 85 as core 13 on socket 0 00:05:40.821 EAL: Detected lcore 86 as core 14 on socket 0 00:05:40.821 EAL: Detected lcore 87 as core 15 on socket 0 00:05:40.821 EAL: Detected lcore 88 as core 16 on socket 0 00:05:40.821 EAL: Detected lcore 89 as core 17 on socket 0 00:05:40.821 EAL: Detected lcore 90 as core 18 on socket 0 00:05:40.821 EAL: Detected lcore 91 as core 19 on socket 0 00:05:40.821 EAL: Detected lcore 92 as core 20 on socket 0 00:05:40.821 EAL: Detected lcore 93 as core 21 on socket 0 00:05:40.821 EAL: Detected lcore 94 as core 22 on socket 0 00:05:40.821 EAL: Detected lcore 95 as core 23 on socket 0 00:05:40.821 EAL: Detected lcore 96 as core 24 on socket 0 00:05:40.821 EAL: Detected lcore 97 as core 25 on socket 0 00:05:40.821 EAL: Detected lcore 98 as core 26 on socket 0 00:05:40.821 EAL: Detected lcore 99 as core 27 on socket 0 00:05:40.821 EAL: Detected lcore 100 as core 28 on socket 0 00:05:40.821 EAL: Detected lcore 101 as core 29 on socket 0 00:05:40.821 EAL: Detected lcore 102 as core 30 on socket 0 00:05:40.821 EAL: Detected lcore 103 as core 31 on socket 0 00:05:40.821 EAL: Detected lcore 104 as core 32 on socket 0 00:05:40.821 EAL: Detected lcore 105 as core 33 on socket 0 00:05:40.821 EAL: Detected lcore 106 as core 34 on socket 0 00:05:40.821 EAL: Detected lcore 107 as core 35 on socket 0 00:05:40.821 EAL: Detected lcore 108 as core 0 on socket 1 00:05:40.821 EAL: Detected lcore 109 as core 1 on socket 1 00:05:40.821 EAL: Detected lcore 110 as core 2 on socket 1 00:05:40.821 EAL: Detected lcore 111 as core 3 on socket 1 00:05:40.821 EAL: Detected lcore 112 as core 4 on socket 1 00:05:40.821 EAL: Detected lcore 113 as core 5 on socket 1 00:05:40.821 EAL: Detected lcore 114 as core 6 on socket 1 00:05:40.821 
EAL: Detected lcore 115 as core 7 on socket 1 00:05:40.821 EAL: Detected lcore 116 as core 8 on socket 1 00:05:40.821 EAL: Detected lcore 117 as core 9 on socket 1 00:05:40.821 EAL: Detected lcore 118 as core 10 on socket 1 00:05:40.821 EAL: Detected lcore 119 as core 11 on socket 1 00:05:40.821 EAL: Detected lcore 120 as core 12 on socket 1 00:05:40.821 EAL: Detected lcore 121 as core 13 on socket 1 00:05:40.821 EAL: Detected lcore 122 as core 14 on socket 1 00:05:40.821 EAL: Detected lcore 123 as core 15 on socket 1 00:05:40.821 EAL: Detected lcore 124 as core 16 on socket 1 00:05:40.821 EAL: Detected lcore 125 as core 17 on socket 1 00:05:40.821 EAL: Detected lcore 126 as core 18 on socket 1 00:05:40.821 EAL: Detected lcore 127 as core 19 on socket 1 00:05:40.821 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:40.821 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:40.821 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:40.821 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:40.821 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:40.821 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:40.821 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:40.821 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:40.821 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:40.821 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:40.821 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:40.821 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:40.821 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:40.821 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:40.821 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:40.821 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:40.821 EAL: Maximum logical cores by configuration: 128 00:05:40.821 EAL: Detected CPU lcores: 128 00:05:40.821 EAL: Detected NUMA nodes: 2 00:05:40.821 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:40.821 EAL: Detected shared linkage of DPDK 00:05:40.821 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:40.821 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:40.821 EAL: Registered [vdev] bus. 
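Note: the long Detected/Skipped lcore run above is EAL enumerating all 144 hardware threads of this two-socket box; lcores 128-143 are skipped only because this DPDK build caps lcores at 128 ("Maximum logical cores by configuration: 128", presumably RTE_MAX_LCORE), not because those threads are unusable. A quick cross-check of the topology and hugepage pools from the shell (generic Linux, nothing SPDK-specific):

  lscpu | grep -E '^(CPU\(s\)|Socket\(s\)|NUMA node\(s\))'
  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages   # 2048 on node0, 0 on node1 per the earlier status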
00:05:40.821 EAL: bus.vdev log level changed from disabled to notice 00:05:40.821 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:40.821 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:40.821 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:40.821 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:40.821 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:40.821 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:40.821 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:40.821 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:40.821 EAL: No shared files mode enabled, IPC will be disabled 00:05:40.821 EAL: No shared files mode enabled, IPC is disabled 00:05:40.821 EAL: Bus pci wants IOVA as 'DC' 00:05:40.821 EAL: Bus vdev wants IOVA as 'DC' 00:05:40.821 EAL: Buses did not request a specific IOVA mode. 00:05:40.821 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:40.821 EAL: Selected IOVA mode 'VA' 00:05:40.821 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.821 EAL: Probing VFIO support... 00:05:40.821 EAL: IOMMU type 1 (Type 1) is supported 00:05:40.821 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:40.821 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:40.821 EAL: VFIO support initialized 00:05:40.821 EAL: Ask a virtual area of 0x2e000 bytes 00:05:40.821 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:40.821 EAL: Setting up physically contiguous memory... 
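Note: the IOVA negotiation above is the interesting part of EAL start-up here: neither bus insists on a mode ('DC'), an IOMMU is present, so EAL selects IOVA-as-VA and the vfio-pci-bound devices can DMA using process virtual addresses; without an IOMMU it would have to fall back to IOVA-as-PA and physical addresses. The "Setting up physically contiguous memory..." step that follows only reserves the large per-socket virtual areas listed on the next lines; hugepages are faulted in later as the heap grows. Quick host-side checks behind "IOMMU is available" and "VFIO support initialized" (independent of the harness):

  ls /sys/kernel/iommu_groups | wc -l      # non-zero once the IOMMU is enabled
  lsmod | grep '^vfio'                     # vfio, vfio_pci, vfio_iommu_type1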
00:05:40.821 EAL: Setting maximum number of open files to 524288 00:05:40.821 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:40.821 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:40.821 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:40.821 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.821 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:40.821 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:40.821 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.821 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:40.821 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:40.821 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.821 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:40.821 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:40.821 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.821 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:40.821 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:40.821 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.821 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:40.821 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:40.821 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.821 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:40.821 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:40.821 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.821 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:40.821 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:40.821 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.821 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:40.821 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:40.821 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:40.821 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.821 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:40.821 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:40.821 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.821 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:40.822 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:40.822 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.822 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:40.822 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:40.822 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.822 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:40.822 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:40.822 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.822 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:40.822 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:40.822 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.822 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:40.822 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:40.822 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.822 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:40.822 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:40.822 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.822 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:40.822 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:40.822 EAL: Hugepages will be freed exactly as allocated. 00:05:40.822 EAL: No shared files mode enabled, IPC is disabled 00:05:40.822 EAL: No shared files mode enabled, IPC is disabled 00:05:40.822 EAL: TSC frequency is ~2400000 KHz 00:05:40.822 EAL: Main lcore 0 is ready (tid=7fba7cee3a00;cpuset=[0]) 00:05:40.822 EAL: Trying to obtain current memory policy. 00:05:40.822 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.822 EAL: Restoring previous memory policy: 0 00:05:40.822 EAL: request: mp_malloc_sync 00:05:40.822 EAL: No shared files mode enabled, IPC is disabled 00:05:40.822 EAL: Heap on socket 0 was expanded by 2MB 00:05:40.822 EAL: No shared files mode enabled, IPC is disabled 00:05:40.822 EAL: No shared files mode enabled, IPC is disabled 00:05:40.822 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:40.822 EAL: Mem event callback 'spdk:(nil)' registered 00:05:40.822 00:05:40.822 00:05:40.822 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.822 http://cunit.sourceforge.net/ 00:05:40.822 00:05:40.822 00:05:40.822 Suite: components_suite 00:05:40.822 Test: vtophys_malloc_test ...passed 00:05:40.822 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:40.822 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.822 EAL: Restoring previous memory policy: 4 00:05:40.822 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.822 EAL: request: mp_malloc_sync 00:05:40.822 EAL: No shared files mode enabled, IPC is disabled 00:05:40.822 EAL: Heap on socket 0 was expanded by 4MB 00:05:40.822 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.822 EAL: request: mp_malloc_sync 00:05:40.822 EAL: No shared files mode enabled, IPC is disabled 00:05:40.822 EAL: Heap on socket 0 was shrunk by 4MB 00:05:40.822 EAL: Trying to obtain current memory policy. 00:05:40.822 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.822 EAL: Restoring previous memory policy: 4 00:05:40.822 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.822 EAL: request: mp_malloc_sync 00:05:40.822 EAL: No shared files mode enabled, IPC is disabled 00:05:40.822 EAL: Heap on socket 0 was expanded by 6MB 00:05:40.822 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.822 EAL: request: mp_malloc_sync 00:05:40.822 EAL: No shared files mode enabled, IPC is disabled 00:05:40.822 EAL: Heap on socket 0 was shrunk by 6MB 00:05:40.822 EAL: Trying to obtain current memory policy. 00:05:40.822 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.822 EAL: Restoring previous memory policy: 4 00:05:40.822 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.822 EAL: request: mp_malloc_sync 00:05:40.822 EAL: No shared files mode enabled, IPC is disabled 00:05:40.822 EAL: Heap on socket 0 was expanded by 10MB 00:05:40.822 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.822 EAL: request: mp_malloc_sync 00:05:40.822 EAL: No shared files mode enabled, IPC is disabled 00:05:40.822 EAL: Heap on socket 0 was shrunk by 10MB 00:05:40.822 EAL: Trying to obtain current memory policy. 
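Note: the "Trying to obtain current memory policy" / expanded / shrunk blocks that continue below are vtophys_spdk_malloc_test walking through roughly doubling allocation sizes (4 MB up to 1026 MB); each round allocates a buffer from the SPDK env, the registered mem event callback reports the heap growing by that amount, and freeing the buffer shrinks it again, confirming hugepages are returned as soon as they are unused ("Hugepages will be freed exactly as allocated"). Hugepage accounting can be watched from outside while such a run is going (a sketch, not part of the test):

  grep -E 'HugePages_(Total|Free|Rsvd)' /proc/meminfo
  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages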
00:05:40.822 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.822 EAL: Restoring previous memory policy: 4 00:05:40.822 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.822 EAL: request: mp_malloc_sync 00:05:40.822 EAL: No shared files mode enabled, IPC is disabled 00:05:40.822 EAL: Heap on socket 0 was expanded by 18MB 00:05:40.822 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.822 EAL: request: mp_malloc_sync 00:05:40.822 EAL: No shared files mode enabled, IPC is disabled 00:05:40.822 EAL: Heap on socket 0 was shrunk by 18MB 00:05:40.822 EAL: Trying to obtain current memory policy. 00:05:40.822 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.822 EAL: Restoring previous memory policy: 4 00:05:40.822 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.822 EAL: request: mp_malloc_sync 00:05:40.822 EAL: No shared files mode enabled, IPC is disabled 00:05:40.822 EAL: Heap on socket 0 was expanded by 34MB 00:05:40.822 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.822 EAL: request: mp_malloc_sync 00:05:40.822 EAL: No shared files mode enabled, IPC is disabled 00:05:40.822 EAL: Heap on socket 0 was shrunk by 34MB 00:05:40.822 EAL: Trying to obtain current memory policy. 00:05:40.822 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.822 EAL: Restoring previous memory policy: 4 00:05:40.822 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.822 EAL: request: mp_malloc_sync 00:05:40.822 EAL: No shared files mode enabled, IPC is disabled 00:05:40.822 EAL: Heap on socket 0 was expanded by 66MB 00:05:40.822 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.822 EAL: request: mp_malloc_sync 00:05:40.822 EAL: No shared files mode enabled, IPC is disabled 00:05:40.822 EAL: Heap on socket 0 was shrunk by 66MB 00:05:40.822 EAL: Trying to obtain current memory policy. 00:05:40.822 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.822 EAL: Restoring previous memory policy: 4 00:05:40.822 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.822 EAL: request: mp_malloc_sync 00:05:40.822 EAL: No shared files mode enabled, IPC is disabled 00:05:40.822 EAL: Heap on socket 0 was expanded by 130MB 00:05:40.822 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.822 EAL: request: mp_malloc_sync 00:05:40.822 EAL: No shared files mode enabled, IPC is disabled 00:05:40.822 EAL: Heap on socket 0 was shrunk by 130MB 00:05:40.822 EAL: Trying to obtain current memory policy. 00:05:40.822 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.082 EAL: Restoring previous memory policy: 4 00:05:41.082 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.082 EAL: request: mp_malloc_sync 00:05:41.082 EAL: No shared files mode enabled, IPC is disabled 00:05:41.082 EAL: Heap on socket 0 was expanded by 258MB 00:05:41.082 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.082 EAL: request: mp_malloc_sync 00:05:41.082 EAL: No shared files mode enabled, IPC is disabled 00:05:41.082 EAL: Heap on socket 0 was shrunk by 258MB 00:05:41.082 EAL: Trying to obtain current memory policy. 
00:05:41.082 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.082 EAL: Restoring previous memory policy: 4 00:05:41.082 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.082 EAL: request: mp_malloc_sync 00:05:41.082 EAL: No shared files mode enabled, IPC is disabled 00:05:41.082 EAL: Heap on socket 0 was expanded by 514MB 00:05:41.082 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.082 EAL: request: mp_malloc_sync 00:05:41.082 EAL: No shared files mode enabled, IPC is disabled 00:05:41.082 EAL: Heap on socket 0 was shrunk by 514MB 00:05:41.082 EAL: Trying to obtain current memory policy. 00:05:41.082 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.342 EAL: Restoring previous memory policy: 4 00:05:41.342 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.342 EAL: request: mp_malloc_sync 00:05:41.342 EAL: No shared files mode enabled, IPC is disabled 00:05:41.342 EAL: Heap on socket 0 was expanded by 1026MB 00:05:41.342 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.603 EAL: request: mp_malloc_sync 00:05:41.603 EAL: No shared files mode enabled, IPC is disabled 00:05:41.603 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:41.603 passed 00:05:41.603 00:05:41.603 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.603 suites 1 1 n/a 0 0 00:05:41.603 tests 2 2 2 0 0 00:05:41.603 asserts 497 497 497 0 n/a 00:05:41.603 00:05:41.603 Elapsed time = 0.644 seconds 00:05:41.603 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.603 EAL: request: mp_malloc_sync 00:05:41.603 EAL: No shared files mode enabled, IPC is disabled 00:05:41.603 EAL: Heap on socket 0 was shrunk by 2MB 00:05:41.603 EAL: No shared files mode enabled, IPC is disabled 00:05:41.603 EAL: No shared files mode enabled, IPC is disabled 00:05:41.603 EAL: No shared files mode enabled, IPC is disabled 00:05:41.603 00:05:41.603 real 0m0.767s 00:05:41.603 user 0m0.415s 00:05:41.603 sys 0m0.318s 00:05:41.603 23:02:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.603 23:02:04 -- common/autotest_common.sh@10 -- # set +x 00:05:41.603 ************************************ 00:05:41.603 END TEST env_vtophys 00:05:41.603 ************************************ 00:05:41.603 23:02:04 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:41.603 23:02:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:41.603 23:02:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.603 23:02:04 -- common/autotest_common.sh@10 -- # set +x 00:05:41.603 ************************************ 00:05:41.603 START TEST env_pci 00:05:41.603 ************************************ 00:05:41.603 23:02:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:41.603 00:05:41.603 00:05:41.603 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.603 http://cunit.sourceforge.net/ 00:05:41.603 00:05:41.603 00:05:41.603 Suite: pci 00:05:41.603 Test: pci_hook ...[2024-06-07 23:02:04.143294] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2612506 has claimed it 00:05:41.603 EAL: Cannot find device (10000:00:01.0) 00:05:41.603 EAL: Failed to attach device on primary process 00:05:41.603 passed 00:05:41.603 00:05:41.603 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.603 suites 1 1 n/a 0 0 00:05:41.603 tests 1 1 1 0 0 
00:05:41.603 asserts 25 25 25 0 n/a 00:05:41.603 00:05:41.603 Elapsed time = 0.032 seconds 00:05:41.603 00:05:41.603 real 0m0.051s 00:05:41.603 user 0m0.016s 00:05:41.603 sys 0m0.035s 00:05:41.603 23:02:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.603 23:02:04 -- common/autotest_common.sh@10 -- # set +x 00:05:41.603 ************************************ 00:05:41.603 END TEST env_pci 00:05:41.603 ************************************ 00:05:41.603 23:02:04 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:41.603 23:02:04 -- env/env.sh@15 -- # uname 00:05:41.603 23:02:04 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:41.603 23:02:04 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:41.603 23:02:04 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:41.603 23:02:04 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:41.603 23:02:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.603 23:02:04 -- common/autotest_common.sh@10 -- # set +x 00:05:41.603 ************************************ 00:05:41.603 START TEST env_dpdk_post_init 00:05:41.603 ************************************ 00:05:41.603 23:02:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:41.603 EAL: Detected CPU lcores: 128 00:05:41.603 EAL: Detected NUMA nodes: 2 00:05:41.603 EAL: Detected shared linkage of DPDK 00:05:41.603 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:41.864 EAL: Selected IOVA mode 'VA' 00:05:41.864 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.864 EAL: VFIO support initialized 00:05:41.864 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:41.864 EAL: Using IOMMU type 1 (Type 1) 00:05:41.864 EAL: Ignore mapping IO port bar(1) 00:05:42.124 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:42.124 EAL: Ignore mapping IO port bar(1) 00:05:42.124 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:42.385 EAL: Ignore mapping IO port bar(1) 00:05:42.385 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:42.645 EAL: Ignore mapping IO port bar(1) 00:05:42.645 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:42.906 EAL: Ignore mapping IO port bar(1) 00:05:42.906 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:42.906 EAL: Ignore mapping IO port bar(1) 00:05:43.166 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:43.166 EAL: Ignore mapping IO port bar(1) 00:05:43.427 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:43.427 EAL: Ignore mapping IO port bar(1) 00:05:43.687 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:43.687 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:43.947 EAL: Ignore mapping IO port bar(1) 00:05:43.947 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:05:44.207 EAL: Ignore mapping IO port bar(1) 00:05:44.207 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:44.473 EAL: Ignore mapping IO port bar(1) 00:05:44.473 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 
00:05:44.473 EAL: Ignore mapping IO port bar(1) 00:05:44.734 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:44.734 EAL: Ignore mapping IO port bar(1) 00:05:44.994 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:44.994 EAL: Ignore mapping IO port bar(1) 00:05:45.255 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:45.255 EAL: Ignore mapping IO port bar(1) 00:05:45.255 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:45.516 EAL: Ignore mapping IO port bar(1) 00:05:45.516 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:45.516 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:45.516 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:45.776 Starting DPDK initialization... 00:05:45.776 Starting SPDK post initialization... 00:05:45.776 SPDK NVMe probe 00:05:45.776 Attaching to 0000:65:00.0 00:05:45.776 Attached to 0000:65:00.0 00:05:45.776 Cleaning up... 00:05:47.690 00:05:47.690 real 0m5.714s 00:05:47.690 user 0m0.174s 00:05:47.690 sys 0m0.087s 00:05:47.690 23:02:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.690 23:02:09 -- common/autotest_common.sh@10 -- # set +x 00:05:47.690 ************************************ 00:05:47.690 END TEST env_dpdk_post_init 00:05:47.690 ************************************ 00:05:47.690 23:02:09 -- env/env.sh@26 -- # uname 00:05:47.690 23:02:09 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:47.690 23:02:09 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:47.690 23:02:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:47.690 23:02:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:47.690 23:02:09 -- common/autotest_common.sh@10 -- # set +x 00:05:47.690 ************************************ 00:05:47.690 START TEST env_mem_callbacks 00:05:47.690 ************************************ 00:05:47.690 23:02:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:47.690 EAL: Detected CPU lcores: 128 00:05:47.690 EAL: Detected NUMA nodes: 2 00:05:47.690 EAL: Detected shared linkage of DPDK 00:05:47.690 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:47.690 EAL: Selected IOVA mode 'VA' 00:05:47.690 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.690 EAL: VFIO support initialized 00:05:47.690 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:47.690 00:05:47.690 00:05:47.690 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.690 http://cunit.sourceforge.net/ 00:05:47.690 00:05:47.690 00:05:47.690 Suite: memory 00:05:47.690 Test: test ... 
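Note: env_dpdk_post_init above is the round-trip check for device hand-off: EAL is started with -c 0x1 and a fixed --base-virtaddr, every vfio-bound I/OAT channel is probed by spdk_ioat (the "Ignore mapping IO port bar(1)" lines are harmless, the I/O-port BAR is simply not needed), and the NVMe controller at 0000:65:00.0 is attached by spdk_nvme and then released during cleanup. The env_mem_callbacks run that follows prints every register/unregister the allocator performs. Which driver a given device ended up on can always be read straight from sysfs (independent of the harness):

  for d in /sys/bus/pci/devices/0000:65:00.0 /sys/bus/pci/devices/0000:80:01.0; do
      echo "$d -> $(basename "$(readlink -f "$d/driver")")"
  done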
00:05:47.690 register 0x200000200000 2097152 00:05:47.690 malloc 3145728 00:05:47.690 register 0x200000400000 4194304 00:05:47.690 buf 0x200000500000 len 3145728 PASSED 00:05:47.690 malloc 64 00:05:47.690 buf 0x2000004fff40 len 64 PASSED 00:05:47.690 malloc 4194304 00:05:47.690 register 0x200000800000 6291456 00:05:47.690 buf 0x200000a00000 len 4194304 PASSED 00:05:47.690 free 0x200000500000 3145728 00:05:47.690 free 0x2000004fff40 64 00:05:47.690 unregister 0x200000400000 4194304 PASSED 00:05:47.690 free 0x200000a00000 4194304 00:05:47.690 unregister 0x200000800000 6291456 PASSED 00:05:47.690 malloc 8388608 00:05:47.690 register 0x200000400000 10485760 00:05:47.690 buf 0x200000600000 len 8388608 PASSED 00:05:47.690 free 0x200000600000 8388608 00:05:47.690 unregister 0x200000400000 10485760 PASSED 00:05:47.690 passed 00:05:47.690 00:05:47.690 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.690 suites 1 1 n/a 0 0 00:05:47.690 tests 1 1 1 0 0 00:05:47.690 asserts 15 15 15 0 n/a 00:05:47.690 00:05:47.690 Elapsed time = 0.005 seconds 00:05:47.690 00:05:47.690 real 0m0.056s 00:05:47.690 user 0m0.015s 00:05:47.690 sys 0m0.040s 00:05:47.690 23:02:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.690 23:02:10 -- common/autotest_common.sh@10 -- # set +x 00:05:47.690 ************************************ 00:05:47.690 END TEST env_mem_callbacks 00:05:47.690 ************************************ 00:05:47.690 00:05:47.690 real 0m7.132s 00:05:47.690 user 0m0.940s 00:05:47.690 sys 0m0.745s 00:05:47.690 23:02:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.690 23:02:10 -- common/autotest_common.sh@10 -- # set +x 00:05:47.690 ************************************ 00:05:47.690 END TEST env 00:05:47.690 ************************************ 00:05:47.690 23:02:10 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:47.690 23:02:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:47.690 23:02:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:47.690 23:02:10 -- common/autotest_common.sh@10 -- # set +x 00:05:47.690 ************************************ 00:05:47.690 START TEST rpc 00:05:47.690 ************************************ 00:05:47.690 23:02:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:47.690 * Looking for test storage... 00:05:47.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:47.690 23:02:10 -- rpc/rpc.sh@65 -- # spdk_pid=2613816 00:05:47.690 23:02:10 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.690 23:02:10 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:47.690 23:02:10 -- rpc/rpc.sh@67 -- # waitforlisten 2613816 00:05:47.690 23:02:10 -- common/autotest_common.sh@819 -- # '[' -z 2613816 ']' 00:05:47.690 23:02:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.690 23:02:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:47.691 23:02:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
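Note: from here on everything is driven over the SPDK JSON-RPC socket: rpc.sh starts spdk_tgt with -e bdev (only the bdev tracepoint group enabled), waits for /var/tmp/spdk.sock, and the rpc_integrity test that follows creates Malloc0 with bdev_malloc_create, stacks Passthru0 on it with bdev_passthru_create, checks bdev_get_bdevs lengths with jq, and tears both down again. A minimal stand-alone version of that flow, assuming this workspace layout and the stock scripts/rpc.py (which talks to /var/tmp/spdk.sock by default):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./build/bin/spdk_tgt -e bdev &
  tgt_pid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done      # crude stand-in for waitforlisten
  scripts/rpc.py bdev_malloc_create 8 512                  # 8 MiB bdev, 512-byte blocks -> Malloc0
  scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  scripts/rpc.py bdev_get_bdevs | jq length                # expect 2
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete Malloc0
  kill $tgt_pid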
00:05:47.691 23:02:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:47.691 23:02:10 -- common/autotest_common.sh@10 -- # set +x 00:05:47.691 [2024-06-07 23:02:10.273621] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:47.691 [2024-06-07 23:02:10.273676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2613816 ] 00:05:47.691 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.691 [2024-06-07 23:02:10.333748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.691 [2024-06-07 23:02:10.363587] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:47.691 [2024-06-07 23:02:10.363708] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:47.691 [2024-06-07 23:02:10.363717] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2613816' to capture a snapshot of events at runtime. 00:05:47.691 [2024-06-07 23:02:10.363725] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2613816 for offline analysis/debug. 00:05:47.691 [2024-06-07 23:02:10.363744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.631 23:02:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:48.631 23:02:10 -- common/autotest_common.sh@852 -- # return 0 00:05:48.631 23:02:10 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:48.631 23:02:10 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:48.631 23:02:10 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:48.631 23:02:10 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:48.631 23:02:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:48.631 23:02:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:48.631 23:02:10 -- common/autotest_common.sh@10 -- # set +x 00:05:48.631 ************************************ 00:05:48.631 START TEST rpc_integrity 00:05:48.631 ************************************ 00:05:48.631 23:02:11 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:48.631 23:02:11 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:48.631 23:02:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.631 23:02:11 -- common/autotest_common.sh@10 -- # set +x 00:05:48.631 23:02:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.631 23:02:11 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:48.631 23:02:11 -- rpc/rpc.sh@13 -- # jq length 00:05:48.631 23:02:11 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:48.631 23:02:11 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:48.631 23:02:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.631 23:02:11 -- common/autotest_common.sh@10 -- # set +x 00:05:48.631 23:02:11 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:05:48.631 23:02:11 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:48.631 23:02:11 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:48.631 23:02:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.631 23:02:11 -- common/autotest_common.sh@10 -- # set +x 00:05:48.631 23:02:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.631 23:02:11 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:48.631 { 00:05:48.631 "name": "Malloc0", 00:05:48.631 "aliases": [ 00:05:48.631 "802eec4b-69d0-43e8-9680-d6987419f2ad" 00:05:48.631 ], 00:05:48.631 "product_name": "Malloc disk", 00:05:48.631 "block_size": 512, 00:05:48.631 "num_blocks": 16384, 00:05:48.631 "uuid": "802eec4b-69d0-43e8-9680-d6987419f2ad", 00:05:48.631 "assigned_rate_limits": { 00:05:48.631 "rw_ios_per_sec": 0, 00:05:48.631 "rw_mbytes_per_sec": 0, 00:05:48.631 "r_mbytes_per_sec": 0, 00:05:48.631 "w_mbytes_per_sec": 0 00:05:48.631 }, 00:05:48.631 "claimed": false, 00:05:48.631 "zoned": false, 00:05:48.631 "supported_io_types": { 00:05:48.631 "read": true, 00:05:48.631 "write": true, 00:05:48.631 "unmap": true, 00:05:48.631 "write_zeroes": true, 00:05:48.631 "flush": true, 00:05:48.631 "reset": true, 00:05:48.631 "compare": false, 00:05:48.631 "compare_and_write": false, 00:05:48.631 "abort": true, 00:05:48.631 "nvme_admin": false, 00:05:48.631 "nvme_io": false 00:05:48.631 }, 00:05:48.631 "memory_domains": [ 00:05:48.631 { 00:05:48.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.631 "dma_device_type": 2 00:05:48.631 } 00:05:48.631 ], 00:05:48.631 "driver_specific": {} 00:05:48.631 } 00:05:48.631 ]' 00:05:48.631 23:02:11 -- rpc/rpc.sh@17 -- # jq length 00:05:48.631 23:02:11 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:48.631 23:02:11 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:48.631 23:02:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.631 23:02:11 -- common/autotest_common.sh@10 -- # set +x 00:05:48.631 [2024-06-07 23:02:11.133198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:48.631 [2024-06-07 23:02:11.133233] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:48.631 [2024-06-07 23:02:11.133251] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd49a70 00:05:48.631 [2024-06-07 23:02:11.133259] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:48.631 [2024-06-07 23:02:11.134552] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:48.631 [2024-06-07 23:02:11.134572] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:48.631 Passthru0 00:05:48.631 23:02:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.631 23:02:11 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:48.631 23:02:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.631 23:02:11 -- common/autotest_common.sh@10 -- # set +x 00:05:48.631 23:02:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.631 23:02:11 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:48.631 { 00:05:48.631 "name": "Malloc0", 00:05:48.631 "aliases": [ 00:05:48.631 "802eec4b-69d0-43e8-9680-d6987419f2ad" 00:05:48.631 ], 00:05:48.631 "product_name": "Malloc disk", 00:05:48.631 "block_size": 512, 00:05:48.631 "num_blocks": 16384, 00:05:48.631 "uuid": "802eec4b-69d0-43e8-9680-d6987419f2ad", 00:05:48.631 "assigned_rate_limits": { 00:05:48.631 "rw_ios_per_sec": 0, 00:05:48.631 "rw_mbytes_per_sec": 0, 00:05:48.631 
"r_mbytes_per_sec": 0, 00:05:48.631 "w_mbytes_per_sec": 0 00:05:48.631 }, 00:05:48.631 "claimed": true, 00:05:48.631 "claim_type": "exclusive_write", 00:05:48.631 "zoned": false, 00:05:48.631 "supported_io_types": { 00:05:48.631 "read": true, 00:05:48.631 "write": true, 00:05:48.631 "unmap": true, 00:05:48.631 "write_zeroes": true, 00:05:48.631 "flush": true, 00:05:48.631 "reset": true, 00:05:48.631 "compare": false, 00:05:48.631 "compare_and_write": false, 00:05:48.631 "abort": true, 00:05:48.631 "nvme_admin": false, 00:05:48.631 "nvme_io": false 00:05:48.631 }, 00:05:48.632 "memory_domains": [ 00:05:48.632 { 00:05:48.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.632 "dma_device_type": 2 00:05:48.632 } 00:05:48.632 ], 00:05:48.632 "driver_specific": {} 00:05:48.632 }, 00:05:48.632 { 00:05:48.632 "name": "Passthru0", 00:05:48.632 "aliases": [ 00:05:48.632 "071ed9cd-368a-564e-9874-73cc311e05ce" 00:05:48.632 ], 00:05:48.632 "product_name": "passthru", 00:05:48.632 "block_size": 512, 00:05:48.632 "num_blocks": 16384, 00:05:48.632 "uuid": "071ed9cd-368a-564e-9874-73cc311e05ce", 00:05:48.632 "assigned_rate_limits": { 00:05:48.632 "rw_ios_per_sec": 0, 00:05:48.632 "rw_mbytes_per_sec": 0, 00:05:48.632 "r_mbytes_per_sec": 0, 00:05:48.632 "w_mbytes_per_sec": 0 00:05:48.632 }, 00:05:48.632 "claimed": false, 00:05:48.632 "zoned": false, 00:05:48.632 "supported_io_types": { 00:05:48.632 "read": true, 00:05:48.632 "write": true, 00:05:48.632 "unmap": true, 00:05:48.632 "write_zeroes": true, 00:05:48.632 "flush": true, 00:05:48.632 "reset": true, 00:05:48.632 "compare": false, 00:05:48.632 "compare_and_write": false, 00:05:48.632 "abort": true, 00:05:48.632 "nvme_admin": false, 00:05:48.632 "nvme_io": false 00:05:48.632 }, 00:05:48.632 "memory_domains": [ 00:05:48.632 { 00:05:48.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.632 "dma_device_type": 2 00:05:48.632 } 00:05:48.632 ], 00:05:48.632 "driver_specific": { 00:05:48.632 "passthru": { 00:05:48.632 "name": "Passthru0", 00:05:48.632 "base_bdev_name": "Malloc0" 00:05:48.632 } 00:05:48.632 } 00:05:48.632 } 00:05:48.632 ]' 00:05:48.632 23:02:11 -- rpc/rpc.sh@21 -- # jq length 00:05:48.632 23:02:11 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:48.632 23:02:11 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:48.632 23:02:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.632 23:02:11 -- common/autotest_common.sh@10 -- # set +x 00:05:48.632 23:02:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.632 23:02:11 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:48.632 23:02:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.632 23:02:11 -- common/autotest_common.sh@10 -- # set +x 00:05:48.632 23:02:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.632 23:02:11 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:48.632 23:02:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.632 23:02:11 -- common/autotest_common.sh@10 -- # set +x 00:05:48.632 23:02:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.632 23:02:11 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:48.632 23:02:11 -- rpc/rpc.sh@26 -- # jq length 00:05:48.632 23:02:11 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:48.632 00:05:48.632 real 0m0.272s 00:05:48.632 user 0m0.171s 00:05:48.632 sys 0m0.042s 00:05:48.632 23:02:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.632 23:02:11 -- common/autotest_common.sh@10 -- # set +x 00:05:48.632 ************************************ 
00:05:48.632 END TEST rpc_integrity 00:05:48.632 ************************************ 00:05:48.892 23:02:11 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:48.892 23:02:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:48.892 23:02:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:48.892 23:02:11 -- common/autotest_common.sh@10 -- # set +x 00:05:48.892 ************************************ 00:05:48.892 START TEST rpc_plugins 00:05:48.892 ************************************ 00:05:48.892 23:02:11 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:48.892 23:02:11 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:48.892 23:02:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.892 23:02:11 -- common/autotest_common.sh@10 -- # set +x 00:05:48.892 23:02:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.892 23:02:11 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:48.892 23:02:11 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:48.892 23:02:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.892 23:02:11 -- common/autotest_common.sh@10 -- # set +x 00:05:48.892 23:02:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.892 23:02:11 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:48.892 { 00:05:48.892 "name": "Malloc1", 00:05:48.892 "aliases": [ 00:05:48.892 "0f740ea7-e4f7-4658-b529-356343591748" 00:05:48.892 ], 00:05:48.892 "product_name": "Malloc disk", 00:05:48.892 "block_size": 4096, 00:05:48.892 "num_blocks": 256, 00:05:48.892 "uuid": "0f740ea7-e4f7-4658-b529-356343591748", 00:05:48.892 "assigned_rate_limits": { 00:05:48.892 "rw_ios_per_sec": 0, 00:05:48.892 "rw_mbytes_per_sec": 0, 00:05:48.892 "r_mbytes_per_sec": 0, 00:05:48.892 "w_mbytes_per_sec": 0 00:05:48.892 }, 00:05:48.892 "claimed": false, 00:05:48.892 "zoned": false, 00:05:48.892 "supported_io_types": { 00:05:48.892 "read": true, 00:05:48.892 "write": true, 00:05:48.892 "unmap": true, 00:05:48.892 "write_zeroes": true, 00:05:48.892 "flush": true, 00:05:48.892 "reset": true, 00:05:48.892 "compare": false, 00:05:48.892 "compare_and_write": false, 00:05:48.892 "abort": true, 00:05:48.892 "nvme_admin": false, 00:05:48.892 "nvme_io": false 00:05:48.892 }, 00:05:48.892 "memory_domains": [ 00:05:48.892 { 00:05:48.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.892 "dma_device_type": 2 00:05:48.892 } 00:05:48.892 ], 00:05:48.892 "driver_specific": {} 00:05:48.892 } 00:05:48.892 ]' 00:05:48.892 23:02:11 -- rpc/rpc.sh@32 -- # jq length 00:05:48.892 23:02:11 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:48.892 23:02:11 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:48.892 23:02:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.892 23:02:11 -- common/autotest_common.sh@10 -- # set +x 00:05:48.892 23:02:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.892 23:02:11 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:48.892 23:02:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.892 23:02:11 -- common/autotest_common.sh@10 -- # set +x 00:05:48.892 23:02:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.892 23:02:11 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:48.892 23:02:11 -- rpc/rpc.sh@36 -- # jq length 00:05:48.892 23:02:11 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:48.892 00:05:48.892 real 0m0.140s 00:05:48.892 user 0m0.091s 00:05:48.892 sys 0m0.015s 00:05:48.892 23:02:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.892 23:02:11 -- 
common/autotest_common.sh@10 -- # set +x 00:05:48.892 ************************************ 00:05:48.892 END TEST rpc_plugins 00:05:48.892 ************************************ 00:05:48.892 23:02:11 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:48.892 23:02:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:48.892 23:02:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:48.892 23:02:11 -- common/autotest_common.sh@10 -- # set +x 00:05:48.892 ************************************ 00:05:48.892 START TEST rpc_trace_cmd_test 00:05:48.892 ************************************ 00:05:48.892 23:02:11 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:48.892 23:02:11 -- rpc/rpc.sh@40 -- # local info 00:05:48.892 23:02:11 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:48.893 23:02:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.893 23:02:11 -- common/autotest_common.sh@10 -- # set +x 00:05:48.893 23:02:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.893 23:02:11 -- rpc/rpc.sh@42 -- # info='{ 00:05:48.893 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2613816", 00:05:48.893 "tpoint_group_mask": "0x8", 00:05:48.893 "iscsi_conn": { 00:05:48.893 "mask": "0x2", 00:05:48.893 "tpoint_mask": "0x0" 00:05:48.893 }, 00:05:48.893 "scsi": { 00:05:48.893 "mask": "0x4", 00:05:48.893 "tpoint_mask": "0x0" 00:05:48.893 }, 00:05:48.893 "bdev": { 00:05:48.893 "mask": "0x8", 00:05:48.893 "tpoint_mask": "0xffffffffffffffff" 00:05:48.893 }, 00:05:48.893 "nvmf_rdma": { 00:05:48.893 "mask": "0x10", 00:05:48.893 "tpoint_mask": "0x0" 00:05:48.893 }, 00:05:48.893 "nvmf_tcp": { 00:05:48.893 "mask": "0x20", 00:05:48.893 "tpoint_mask": "0x0" 00:05:48.893 }, 00:05:48.893 "ftl": { 00:05:48.893 "mask": "0x40", 00:05:48.893 "tpoint_mask": "0x0" 00:05:48.893 }, 00:05:48.893 "blobfs": { 00:05:48.893 "mask": "0x80", 00:05:48.893 "tpoint_mask": "0x0" 00:05:48.893 }, 00:05:48.893 "dsa": { 00:05:48.893 "mask": "0x200", 00:05:48.893 "tpoint_mask": "0x0" 00:05:48.893 }, 00:05:48.893 "thread": { 00:05:48.893 "mask": "0x400", 00:05:48.893 "tpoint_mask": "0x0" 00:05:48.893 }, 00:05:48.893 "nvme_pcie": { 00:05:48.893 "mask": "0x800", 00:05:48.893 "tpoint_mask": "0x0" 00:05:48.893 }, 00:05:48.893 "iaa": { 00:05:48.893 "mask": "0x1000", 00:05:48.893 "tpoint_mask": "0x0" 00:05:48.893 }, 00:05:48.893 "nvme_tcp": { 00:05:48.893 "mask": "0x2000", 00:05:48.893 "tpoint_mask": "0x0" 00:05:48.893 }, 00:05:48.893 "bdev_nvme": { 00:05:48.893 "mask": "0x4000", 00:05:48.893 "tpoint_mask": "0x0" 00:05:48.893 } 00:05:48.893 }' 00:05:48.893 23:02:11 -- rpc/rpc.sh@43 -- # jq length 00:05:48.893 23:02:11 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:48.893 23:02:11 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:49.153 23:02:11 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:49.153 23:02:11 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:49.153 23:02:11 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:49.153 23:02:11 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:49.153 23:02:11 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:49.153 23:02:11 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:49.153 23:02:11 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:49.153 00:05:49.153 real 0m0.239s 00:05:49.153 user 0m0.204s 00:05:49.153 sys 0m0.025s 00:05:49.153 23:02:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.153 23:02:11 -- common/autotest_common.sh@10 -- # set +x 00:05:49.153 ************************************ 
00:05:49.153 END TEST rpc_trace_cmd_test 00:05:49.153 ************************************ 00:05:49.153 23:02:11 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:49.153 23:02:11 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:49.153 23:02:11 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:49.153 23:02:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:49.153 23:02:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:49.153 23:02:11 -- common/autotest_common.sh@10 -- # set +x 00:05:49.153 ************************************ 00:05:49.153 START TEST rpc_daemon_integrity 00:05:49.153 ************************************ 00:05:49.153 23:02:11 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:49.153 23:02:11 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:49.153 23:02:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.153 23:02:11 -- common/autotest_common.sh@10 -- # set +x 00:05:49.153 23:02:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.153 23:02:11 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:49.153 23:02:11 -- rpc/rpc.sh@13 -- # jq length 00:05:49.414 23:02:11 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:49.414 23:02:11 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:49.414 23:02:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.414 23:02:11 -- common/autotest_common.sh@10 -- # set +x 00:05:49.414 23:02:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.414 23:02:11 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:49.414 23:02:11 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:49.414 23:02:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.414 23:02:11 -- common/autotest_common.sh@10 -- # set +x 00:05:49.414 23:02:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.414 23:02:11 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:49.414 { 00:05:49.414 "name": "Malloc2", 00:05:49.414 "aliases": [ 00:05:49.414 "401f5143-da17-4a1a-ab62-7f93a45ebea4" 00:05:49.414 ], 00:05:49.414 "product_name": "Malloc disk", 00:05:49.414 "block_size": 512, 00:05:49.414 "num_blocks": 16384, 00:05:49.414 "uuid": "401f5143-da17-4a1a-ab62-7f93a45ebea4", 00:05:49.414 "assigned_rate_limits": { 00:05:49.414 "rw_ios_per_sec": 0, 00:05:49.414 "rw_mbytes_per_sec": 0, 00:05:49.414 "r_mbytes_per_sec": 0, 00:05:49.414 "w_mbytes_per_sec": 0 00:05:49.414 }, 00:05:49.414 "claimed": false, 00:05:49.414 "zoned": false, 00:05:49.414 "supported_io_types": { 00:05:49.414 "read": true, 00:05:49.414 "write": true, 00:05:49.414 "unmap": true, 00:05:49.414 "write_zeroes": true, 00:05:49.414 "flush": true, 00:05:49.414 "reset": true, 00:05:49.414 "compare": false, 00:05:49.414 "compare_and_write": false, 00:05:49.414 "abort": true, 00:05:49.414 "nvme_admin": false, 00:05:49.414 "nvme_io": false 00:05:49.414 }, 00:05:49.414 "memory_domains": [ 00:05:49.414 { 00:05:49.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.414 "dma_device_type": 2 00:05:49.414 } 00:05:49.414 ], 00:05:49.414 "driver_specific": {} 00:05:49.414 } 00:05:49.414 ]' 00:05:49.414 23:02:11 -- rpc/rpc.sh@17 -- # jq length 00:05:49.414 23:02:11 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:49.414 23:02:11 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:49.414 23:02:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.414 23:02:11 -- common/autotest_common.sh@10 -- # set +x 00:05:49.414 [2024-06-07 23:02:11.919321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:49.414 [2024-06-07 
23:02:11.919354] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:49.414 [2024-06-07 23:02:11.919367] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd4a9e0 00:05:49.414 [2024-06-07 23:02:11.919374] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:49.414 [2024-06-07 23:02:11.920574] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:49.415 [2024-06-07 23:02:11.920594] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:49.415 Passthru0 00:05:49.415 23:02:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.415 23:02:11 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:49.415 23:02:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.415 23:02:11 -- common/autotest_common.sh@10 -- # set +x 00:05:49.415 23:02:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.415 23:02:11 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:49.415 { 00:05:49.415 "name": "Malloc2", 00:05:49.415 "aliases": [ 00:05:49.415 "401f5143-da17-4a1a-ab62-7f93a45ebea4" 00:05:49.415 ], 00:05:49.415 "product_name": "Malloc disk", 00:05:49.415 "block_size": 512, 00:05:49.415 "num_blocks": 16384, 00:05:49.415 "uuid": "401f5143-da17-4a1a-ab62-7f93a45ebea4", 00:05:49.415 "assigned_rate_limits": { 00:05:49.415 "rw_ios_per_sec": 0, 00:05:49.415 "rw_mbytes_per_sec": 0, 00:05:49.415 "r_mbytes_per_sec": 0, 00:05:49.415 "w_mbytes_per_sec": 0 00:05:49.415 }, 00:05:49.415 "claimed": true, 00:05:49.415 "claim_type": "exclusive_write", 00:05:49.415 "zoned": false, 00:05:49.415 "supported_io_types": { 00:05:49.415 "read": true, 00:05:49.415 "write": true, 00:05:49.415 "unmap": true, 00:05:49.415 "write_zeroes": true, 00:05:49.415 "flush": true, 00:05:49.415 "reset": true, 00:05:49.415 "compare": false, 00:05:49.415 "compare_and_write": false, 00:05:49.415 "abort": true, 00:05:49.415 "nvme_admin": false, 00:05:49.415 "nvme_io": false 00:05:49.415 }, 00:05:49.415 "memory_domains": [ 00:05:49.415 { 00:05:49.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.415 "dma_device_type": 2 00:05:49.415 } 00:05:49.415 ], 00:05:49.415 "driver_specific": {} 00:05:49.415 }, 00:05:49.415 { 00:05:49.415 "name": "Passthru0", 00:05:49.415 "aliases": [ 00:05:49.415 "6644234b-6e89-524e-abfa-32872ad2d8e7" 00:05:49.415 ], 00:05:49.415 "product_name": "passthru", 00:05:49.415 "block_size": 512, 00:05:49.415 "num_blocks": 16384, 00:05:49.415 "uuid": "6644234b-6e89-524e-abfa-32872ad2d8e7", 00:05:49.415 "assigned_rate_limits": { 00:05:49.415 "rw_ios_per_sec": 0, 00:05:49.415 "rw_mbytes_per_sec": 0, 00:05:49.415 "r_mbytes_per_sec": 0, 00:05:49.415 "w_mbytes_per_sec": 0 00:05:49.415 }, 00:05:49.415 "claimed": false, 00:05:49.415 "zoned": false, 00:05:49.415 "supported_io_types": { 00:05:49.415 "read": true, 00:05:49.415 "write": true, 00:05:49.415 "unmap": true, 00:05:49.415 "write_zeroes": true, 00:05:49.415 "flush": true, 00:05:49.415 "reset": true, 00:05:49.415 "compare": false, 00:05:49.415 "compare_and_write": false, 00:05:49.415 "abort": true, 00:05:49.415 "nvme_admin": false, 00:05:49.415 "nvme_io": false 00:05:49.415 }, 00:05:49.415 "memory_domains": [ 00:05:49.415 { 00:05:49.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.415 "dma_device_type": 2 00:05:49.415 } 00:05:49.415 ], 00:05:49.415 "driver_specific": { 00:05:49.415 "passthru": { 00:05:49.415 "name": "Passthru0", 00:05:49.415 "base_bdev_name": "Malloc2" 00:05:49.415 } 00:05:49.415 } 00:05:49.415 } 
00:05:49.415 ]' 00:05:49.415 23:02:11 -- rpc/rpc.sh@21 -- # jq length 00:05:49.415 23:02:11 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:49.415 23:02:11 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:49.415 23:02:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.415 23:02:11 -- common/autotest_common.sh@10 -- # set +x 00:05:49.415 23:02:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.415 23:02:11 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:49.415 23:02:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.415 23:02:11 -- common/autotest_common.sh@10 -- # set +x 00:05:49.415 23:02:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.415 23:02:12 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:49.415 23:02:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.415 23:02:12 -- common/autotest_common.sh@10 -- # set +x 00:05:49.415 23:02:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.415 23:02:12 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:49.415 23:02:12 -- rpc/rpc.sh@26 -- # jq length 00:05:49.415 23:02:12 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:49.415 00:05:49.415 real 0m0.282s 00:05:49.415 user 0m0.175s 00:05:49.415 sys 0m0.037s 00:05:49.415 23:02:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.415 23:02:12 -- common/autotest_common.sh@10 -- # set +x 00:05:49.415 ************************************ 00:05:49.415 END TEST rpc_daemon_integrity 00:05:49.415 ************************************ 00:05:49.676 23:02:12 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:49.676 23:02:12 -- rpc/rpc.sh@84 -- # killprocess 2613816 00:05:49.676 23:02:12 -- common/autotest_common.sh@926 -- # '[' -z 2613816 ']' 00:05:49.676 23:02:12 -- common/autotest_common.sh@930 -- # kill -0 2613816 00:05:49.676 23:02:12 -- common/autotest_common.sh@931 -- # uname 00:05:49.676 23:02:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:49.676 23:02:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2613816 00:05:49.676 23:02:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:49.676 23:02:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:49.676 23:02:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2613816' 00:05:49.676 killing process with pid 2613816 00:05:49.676 23:02:12 -- common/autotest_common.sh@945 -- # kill 2613816 00:05:49.676 23:02:12 -- common/autotest_common.sh@950 -- # wait 2613816 00:05:49.676 00:05:49.676 real 0m2.210s 00:05:49.676 user 0m2.902s 00:05:49.676 sys 0m0.564s 00:05:49.676 23:02:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.676 23:02:12 -- common/autotest_common.sh@10 -- # set +x 00:05:49.676 ************************************ 00:05:49.676 END TEST rpc 00:05:49.676 ************************************ 00:05:49.937 23:02:12 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:49.937 23:02:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:49.937 23:02:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:49.937 23:02:12 -- common/autotest_common.sh@10 -- # set +x 00:05:49.937 ************************************ 00:05:49.937 START TEST rpc_client 00:05:49.937 ************************************ 00:05:49.937 23:02:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 
00:05:49.937 * Looking for test storage... 00:05:49.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:49.937 23:02:12 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:49.937 OK 00:05:49.937 23:02:12 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:49.937 00:05:49.937 real 0m0.120s 00:05:49.937 user 0m0.058s 00:05:49.937 sys 0m0.071s 00:05:49.937 23:02:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.937 23:02:12 -- common/autotest_common.sh@10 -- # set +x 00:05:49.937 ************************************ 00:05:49.937 END TEST rpc_client 00:05:49.937 ************************************ 00:05:49.937 23:02:12 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:49.937 23:02:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:49.937 23:02:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:49.937 23:02:12 -- common/autotest_common.sh@10 -- # set +x 00:05:49.937 ************************************ 00:05:49.937 START TEST json_config 00:05:49.937 ************************************ 00:05:49.937 23:02:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:50.202 23:02:12 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:50.202 23:02:12 -- nvmf/common.sh@7 -- # uname -s 00:05:50.202 23:02:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:50.202 23:02:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:50.202 23:02:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:50.202 23:02:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:50.202 23:02:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:50.202 23:02:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:50.202 23:02:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:50.202 23:02:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:50.202 23:02:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:50.202 23:02:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:50.202 23:02:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:50.202 23:02:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:50.202 23:02:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:50.202 23:02:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:50.202 23:02:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:50.202 23:02:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:50.202 23:02:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:50.202 23:02:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:50.202 23:02:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:50.202 23:02:12 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.202 23:02:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.202 23:02:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.202 23:02:12 -- paths/export.sh@5 -- # export PATH 00:05:50.202 23:02:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.202 23:02:12 -- nvmf/common.sh@46 -- # : 0 00:05:50.202 23:02:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:50.202 23:02:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:50.202 23:02:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:50.202 23:02:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:50.202 23:02:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:50.202 23:02:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:50.202 23:02:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:50.202 23:02:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:50.202 23:02:12 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:50.202 23:02:12 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:50.202 23:02:12 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:50.203 23:02:12 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:50.203 23:02:12 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:50.203 23:02:12 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:50.203 23:02:12 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:50.203 23:02:12 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:50.203 23:02:12 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:50.203 23:02:12 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:50.203 23:02:12 -- json_config/json_config.sh@33 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:50.203 23:02:12 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:50.203 23:02:12 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:50.203 23:02:12 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:50.203 23:02:12 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:50.203 INFO: JSON configuration test init 00:05:50.203 23:02:12 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:50.203 23:02:12 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:50.203 23:02:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:50.203 23:02:12 -- common/autotest_common.sh@10 -- # set +x 00:05:50.203 23:02:12 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:50.203 23:02:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:50.203 23:02:12 -- common/autotest_common.sh@10 -- # set +x 00:05:50.203 23:02:12 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:50.203 23:02:12 -- json_config/json_config.sh@98 -- # local app=target 00:05:50.203 23:02:12 -- json_config/json_config.sh@99 -- # shift 00:05:50.203 23:02:12 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:50.203 23:02:12 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:50.203 23:02:12 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:50.203 23:02:12 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:50.203 23:02:12 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:50.203 23:02:12 -- json_config/json_config.sh@111 -- # app_pid[$app]=2614531 00:05:50.203 23:02:12 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:50.203 Waiting for target to run... 00:05:50.203 23:02:12 -- json_config/json_config.sh@114 -- # waitforlisten 2614531 /var/tmp/spdk_tgt.sock 00:05:50.203 23:02:12 -- common/autotest_common.sh@819 -- # '[' -z 2614531 ']' 00:05:50.203 23:02:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:50.203 23:02:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:50.203 23:02:12 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:50.203 23:02:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:50.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:50.203 23:02:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:50.203 23:02:12 -- common/autotest_common.sh@10 -- # set +x 00:05:50.203 [2024-06-07 23:02:12.720794] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:50.203 [2024-06-07 23:02:12.720883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2614531 ] 00:05:50.203 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.464 [2024-06-07 23:02:12.982096] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.464 [2024-06-07 23:02:12.997637] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:50.464 [2024-06-07 23:02:12.997761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.036 23:02:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:51.036 23:02:13 -- common/autotest_common.sh@852 -- # return 0 00:05:51.036 23:02:13 -- json_config/json_config.sh@115 -- # echo '' 00:05:51.036 00:05:51.036 23:02:13 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:51.036 23:02:13 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:51.036 23:02:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:51.036 23:02:13 -- common/autotest_common.sh@10 -- # set +x 00:05:51.036 23:02:13 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:51.036 23:02:13 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:51.036 23:02:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:51.036 23:02:13 -- common/autotest_common.sh@10 -- # set +x 00:05:51.036 23:02:13 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:51.036 23:02:13 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:51.036 23:02:13 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:51.608 23:02:14 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:51.608 23:02:14 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:51.608 23:02:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:51.608 23:02:14 -- common/autotest_common.sh@10 -- # set +x 00:05:51.609 23:02:14 -- json_config/json_config.sh@48 -- # local ret=0 00:05:51.609 23:02:14 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:51.609 23:02:14 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:51.609 23:02:14 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:51.609 23:02:14 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:51.609 23:02:14 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:51.609 23:02:14 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:51.609 23:02:14 -- json_config/json_config.sh@51 -- # local get_types 00:05:51.609 23:02:14 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:51.609 23:02:14 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:51.609 23:02:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:51.609 23:02:14 -- common/autotest_common.sh@10 -- # set +x 00:05:51.609 23:02:14 -- json_config/json_config.sh@58 -- # return 0 00:05:51.609 23:02:14 -- 
json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:51.609 23:02:14 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:51.609 23:02:14 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:51.609 23:02:14 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:51.609 23:02:14 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:51.609 23:02:14 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:51.609 23:02:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:51.609 23:02:14 -- common/autotest_common.sh@10 -- # set +x 00:05:51.609 23:02:14 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:51.609 23:02:14 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:51.609 23:02:14 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:51.609 23:02:14 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:51.609 23:02:14 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:51.870 MallocForNvmf0 00:05:51.871 23:02:14 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:51.871 23:02:14 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:51.871 MallocForNvmf1 00:05:51.871 23:02:14 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:51.871 23:02:14 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:52.132 [2024-06-07 23:02:14.658391] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:52.132 23:02:14 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:52.132 23:02:14 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:52.393 23:02:14 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:52.393 23:02:14 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:52.393 23:02:14 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:52.393 23:02:14 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:52.654 23:02:15 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:52.654 23:02:15 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:52.654 [2024-06-07 23:02:15.260416] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 
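For reference, the create_nvmf_subsystem_config step traced above amounts to roughly the following rpc.py sequence against the target socket. This is only a sketch reassembled from the commands echoed in the trace; $RPC is shorthand for the full scripts/rpc.py path used there, not a variable the test itself defines.

  RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"                # abbreviated path to the SPDK rpc.py helper
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0           # backing bdevs that become the namespaces
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0                # TCP transport init
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420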
00:05:52.654 23:02:15 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:52.654 23:02:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:52.654 23:02:15 -- common/autotest_common.sh@10 -- # set +x 00:05:52.654 23:02:15 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:52.654 23:02:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:52.654 23:02:15 -- common/autotest_common.sh@10 -- # set +x 00:05:52.915 23:02:15 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:52.915 23:02:15 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:52.915 23:02:15 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:52.915 MallocBdevForConfigChangeCheck 00:05:52.915 23:02:15 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:52.915 23:02:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:52.915 23:02:15 -- common/autotest_common.sh@10 -- # set +x 00:05:52.915 23:02:15 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:52.915 23:02:15 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:53.175 23:02:15 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:53.175 INFO: shutting down applications... 00:05:53.175 23:02:15 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:53.175 23:02:15 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:53.176 23:02:15 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:53.176 23:02:15 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:53.747 Calling clear_iscsi_subsystem 00:05:53.747 Calling clear_nvmf_subsystem 00:05:53.747 Calling clear_nbd_subsystem 00:05:53.747 Calling clear_ublk_subsystem 00:05:53.747 Calling clear_vhost_blk_subsystem 00:05:53.747 Calling clear_vhost_scsi_subsystem 00:05:53.747 Calling clear_scheduler_subsystem 00:05:53.747 Calling clear_bdev_subsystem 00:05:53.747 Calling clear_accel_subsystem 00:05:53.747 Calling clear_vmd_subsystem 00:05:53.747 Calling clear_sock_subsystem 00:05:53.747 Calling clear_iobuf_subsystem 00:05:53.747 23:02:16 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:53.747 23:02:16 -- json_config/json_config.sh@396 -- # count=100 00:05:53.747 23:02:16 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:53.747 23:02:16 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:53.747 23:02:16 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:53.747 23:02:16 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:54.009 23:02:16 -- json_config/json_config.sh@398 -- # break 00:05:54.009 23:02:16 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:54.009 23:02:16 -- json_config/json_config.sh@432 -- # 
json_config_test_shutdown_app target 00:05:54.009 23:02:16 -- json_config/json_config.sh@120 -- # local app=target 00:05:54.009 23:02:16 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:54.009 23:02:16 -- json_config/json_config.sh@124 -- # [[ -n 2614531 ]] 00:05:54.009 23:02:16 -- json_config/json_config.sh@127 -- # kill -SIGINT 2614531 00:05:54.009 23:02:16 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:54.009 23:02:16 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:54.009 23:02:16 -- json_config/json_config.sh@130 -- # kill -0 2614531 00:05:54.009 23:02:16 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:54.582 23:02:17 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:54.582 23:02:17 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:54.582 23:02:17 -- json_config/json_config.sh@130 -- # kill -0 2614531 00:05:54.582 23:02:17 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:54.582 23:02:17 -- json_config/json_config.sh@132 -- # break 00:05:54.582 23:02:17 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:54.582 23:02:17 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:54.582 SPDK target shutdown done 00:05:54.582 23:02:17 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:54.582 INFO: relaunching applications... 00:05:54.582 23:02:17 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:54.582 23:02:17 -- json_config/json_config.sh@98 -- # local app=target 00:05:54.582 23:02:17 -- json_config/json_config.sh@99 -- # shift 00:05:54.582 23:02:17 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:54.582 23:02:17 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:54.582 23:02:17 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:54.582 23:02:17 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:54.582 23:02:17 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:54.582 23:02:17 -- json_config/json_config.sh@111 -- # app_pid[$app]=2615511 00:05:54.582 23:02:17 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:54.582 Waiting for target to run... 00:05:54.582 23:02:17 -- json_config/json_config.sh@114 -- # waitforlisten 2615511 /var/tmp/spdk_tgt.sock 00:05:54.582 23:02:17 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:54.582 23:02:17 -- common/autotest_common.sh@819 -- # '[' -z 2615511 ']' 00:05:54.582 23:02:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:54.582 23:02:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:54.582 23:02:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:54.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:54.582 23:02:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:54.582 23:02:17 -- common/autotest_common.sh@10 -- # set +x 00:05:54.582 [2024-06-07 23:02:17.094613] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:54.582 [2024-06-07 23:02:17.094689] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2615511 ] 00:05:54.582 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.844 [2024-06-07 23:02:17.513655] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.107 [2024-06-07 23:02:17.538650] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:55.107 [2024-06-07 23:02:17.538810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.369 [2024-06-07 23:02:18.000557] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:55.369 [2024-06-07 23:02:18.032940] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:55.941 23:02:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:55.941 23:02:18 -- common/autotest_common.sh@852 -- # return 0 00:05:55.941 23:02:18 -- json_config/json_config.sh@115 -- # echo '' 00:05:55.941 00:05:55.941 23:02:18 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:55.941 23:02:18 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:55.941 INFO: Checking if target configuration is the same... 00:05:55.941 23:02:18 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:55.941 23:02:18 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:55.941 23:02:18 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:55.941 + '[' 2 -ne 2 ']' 00:05:55.941 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:55.941 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:55.941 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:55.941 +++ basename /dev/fd/62 00:05:55.941 ++ mktemp /tmp/62.XXX 00:05:55.941 + tmp_file_1=/tmp/62.Xo4 00:05:55.941 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:55.941 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:55.941 + tmp_file_2=/tmp/spdk_tgt_config.json.Tmn 00:05:55.941 + ret=0 00:05:55.941 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:56.202 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:56.202 + diff -u /tmp/62.Xo4 /tmp/spdk_tgt_config.json.Tmn 00:05:56.202 + echo 'INFO: JSON config files are the same' 00:05:56.202 INFO: JSON config files are the same 00:05:56.202 + rm /tmp/62.Xo4 /tmp/spdk_tgt_config.json.Tmn 00:05:56.202 + exit 0 00:05:56.202 23:02:18 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:56.202 23:02:18 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:56.202 INFO: changing configuration and checking if this can be detected... 
00:05:56.202 23:02:18 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:56.202 23:02:18 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:56.463 23:02:19 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:56.463 23:02:19 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:56.463 23:02:19 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:56.463 + '[' 2 -ne 2 ']' 00:05:56.463 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:56.463 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:56.463 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:56.463 +++ basename /dev/fd/62 00:05:56.463 ++ mktemp /tmp/62.XXX 00:05:56.463 + tmp_file_1=/tmp/62.ori 00:05:56.463 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:56.463 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:56.463 + tmp_file_2=/tmp/spdk_tgt_config.json.fL6 00:05:56.463 + ret=0 00:05:56.463 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:56.724 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:56.724 + diff -u /tmp/62.ori /tmp/spdk_tgt_config.json.fL6 00:05:56.724 + ret=1 00:05:56.724 + echo '=== Start of file: /tmp/62.ori ===' 00:05:56.724 + cat /tmp/62.ori 00:05:56.724 + echo '=== End of file: /tmp/62.ori ===' 00:05:56.724 + echo '' 00:05:56.724 + echo '=== Start of file: /tmp/spdk_tgt_config.json.fL6 ===' 00:05:56.724 + cat /tmp/spdk_tgt_config.json.fL6 00:05:56.724 + echo '=== End of file: /tmp/spdk_tgt_config.json.fL6 ===' 00:05:56.724 + echo '' 00:05:56.724 + rm /tmp/62.ori /tmp/spdk_tgt_config.json.fL6 00:05:56.724 + exit 1 00:05:56.724 23:02:19 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:56.724 INFO: configuration change detected. 
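For reference, the configuration check traced above (json_diff.sh) compares the target's live configuration against the saved spdk_tgt_config.json by sorting both through config_filter.py and diffing the results. A rough equivalent, with illustrative temp-file names and abbreviated paths, is:

  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | test/json_config/config_filter.py -method sort > /tmp/live.json      # running target's config
  test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/disk.json
  diff -u /tmp/live.json /tmp/disk.json    # exit 0: configs match; non-zero: change detected

In the run above, deleting MallocBdevForConfigChangeCheck made the second comparison non-empty, so the test correctly reports the configuration change.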
00:05:56.724 23:02:19 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:56.724 23:02:19 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:56.724 23:02:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:56.724 23:02:19 -- common/autotest_common.sh@10 -- # set +x 00:05:56.724 23:02:19 -- json_config/json_config.sh@360 -- # local ret=0 00:05:56.724 23:02:19 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:56.724 23:02:19 -- json_config/json_config.sh@370 -- # [[ -n 2615511 ]] 00:05:56.724 23:02:19 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:56.724 23:02:19 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:56.724 23:02:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:56.724 23:02:19 -- common/autotest_common.sh@10 -- # set +x 00:05:56.724 23:02:19 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:56.724 23:02:19 -- json_config/json_config.sh@246 -- # uname -s 00:05:56.724 23:02:19 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:56.724 23:02:19 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:56.724 23:02:19 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:56.724 23:02:19 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:56.724 23:02:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:56.724 23:02:19 -- common/autotest_common.sh@10 -- # set +x 00:05:56.724 23:02:19 -- json_config/json_config.sh@376 -- # killprocess 2615511 00:05:56.724 23:02:19 -- common/autotest_common.sh@926 -- # '[' -z 2615511 ']' 00:05:56.724 23:02:19 -- common/autotest_common.sh@930 -- # kill -0 2615511 00:05:56.724 23:02:19 -- common/autotest_common.sh@931 -- # uname 00:05:56.985 23:02:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:56.985 23:02:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2615511 00:05:56.985 23:02:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:56.985 23:02:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:56.985 23:02:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2615511' 00:05:56.985 killing process with pid 2615511 00:05:56.985 23:02:19 -- common/autotest_common.sh@945 -- # kill 2615511 00:05:56.985 23:02:19 -- common/autotest_common.sh@950 -- # wait 2615511 00:05:57.245 23:02:19 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:57.245 23:02:19 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:57.245 23:02:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:57.245 23:02:19 -- common/autotest_common.sh@10 -- # set +x 00:05:57.245 23:02:19 -- json_config/json_config.sh@381 -- # return 0 00:05:57.245 23:02:19 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:57.245 INFO: Success 00:05:57.245 00:05:57.245 real 0m7.210s 00:05:57.245 user 0m8.518s 00:05:57.245 sys 0m1.814s 00:05:57.245 23:02:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.245 23:02:19 -- common/autotest_common.sh@10 -- # set +x 00:05:57.245 ************************************ 00:05:57.245 END TEST json_config 00:05:57.245 ************************************ 00:05:57.245 23:02:19 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:57.245 23:02:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:57.245 23:02:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:57.245 23:02:19 -- common/autotest_common.sh@10 -- # set +x 00:05:57.245 ************************************ 00:05:57.245 START TEST json_config_extra_key 00:05:57.245 ************************************ 00:05:57.245 23:02:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:57.245 23:02:19 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:57.245 23:02:19 -- nvmf/common.sh@7 -- # uname -s 00:05:57.245 23:02:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:57.245 23:02:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:57.245 23:02:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:57.245 23:02:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:57.245 23:02:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:57.245 23:02:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:57.245 23:02:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:57.245 23:02:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:57.245 23:02:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:57.245 23:02:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:57.245 23:02:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:57.245 23:02:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:57.245 23:02:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:57.245 23:02:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:57.245 23:02:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:57.245 23:02:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:57.245 23:02:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:57.245 23:02:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:57.245 23:02:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:57.245 23:02:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.245 23:02:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.245 23:02:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.245 23:02:19 -- paths/export.sh@5 -- # export PATH 00:05:57.245 23:02:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.245 23:02:19 -- nvmf/common.sh@46 -- # : 0 00:05:57.245 23:02:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:57.245 23:02:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:57.245 23:02:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:57.245 23:02:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:57.245 23:02:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:57.245 23:02:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:57.245 23:02:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:57.246 23:02:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:57.246 23:02:19 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:57.246 23:02:19 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:57.246 23:02:19 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:57.246 23:02:19 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:57.246 23:02:19 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:57.246 23:02:19 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:57.246 23:02:19 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:57.246 23:02:19 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:57.246 23:02:19 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:57.246 23:02:19 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:57.246 INFO: launching applications... 00:05:57.246 23:02:19 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:57.246 23:02:19 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:57.246 23:02:19 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:57.246 23:02:19 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:57.246 23:02:19 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:57.246 23:02:19 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=2616299 00:05:57.246 23:02:19 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:57.246 Waiting for target to run... 
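For reference, json_config_extra_key starts the target directly from a pre-built JSON file instead of configuring it over RPC; the json_config_test_start_app call traced above boils down to roughly the following (paths abbreviated):

  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json test/json_config/extra_key.json &
  # the test then waits for the socket to appear and later stops the target with SIGINT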
00:05:57.246 23:02:19 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 2616299 /var/tmp/spdk_tgt.sock 00:05:57.246 23:02:19 -- common/autotest_common.sh@819 -- # '[' -z 2616299 ']' 00:05:57.246 23:02:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:57.246 23:02:19 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:57.246 23:02:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:57.246 23:02:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:57.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:57.246 23:02:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:57.246 23:02:19 -- common/autotest_common.sh@10 -- # set +x 00:05:57.506 [2024-06-07 23:02:19.967638] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:57.506 [2024-06-07 23:02:19.967698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2616299 ] 00:05:57.506 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.767 [2024-06-07 23:02:20.292087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.767 [2024-06-07 23:02:20.309071] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:57.767 [2024-06-07 23:02:20.309198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.340 23:02:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:58.340 23:02:20 -- common/autotest_common.sh@852 -- # return 0 00:05:58.340 23:02:20 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:58.340 00:05:58.340 23:02:20 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:58.340 INFO: shutting down applications... 
00:05:58.340 23:02:20 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:58.340 23:02:20 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:58.340 23:02:20 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:58.340 23:02:20 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 2616299 ]] 00:05:58.340 23:02:20 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 2616299 00:05:58.340 23:02:20 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:58.340 23:02:20 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:58.340 23:02:20 -- json_config/json_config_extra_key.sh@50 -- # kill -0 2616299 00:05:58.340 23:02:20 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:58.602 23:02:21 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:58.602 23:02:21 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:58.602 23:02:21 -- json_config/json_config_extra_key.sh@50 -- # kill -0 2616299 00:05:58.602 23:02:21 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:58.602 23:02:21 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:58.602 23:02:21 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:58.602 23:02:21 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:58.602 SPDK target shutdown done 00:05:58.602 23:02:21 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:58.602 Success 00:05:58.602 00:05:58.602 real 0m1.420s 00:05:58.602 user 0m1.010s 00:05:58.602 sys 0m0.407s 00:05:58.602 23:02:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.602 23:02:21 -- common/autotest_common.sh@10 -- # set +x 00:05:58.602 ************************************ 00:05:58.602 END TEST json_config_extra_key 00:05:58.602 ************************************ 00:05:58.602 23:02:21 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:58.602 23:02:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:58.602 23:02:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:58.602 23:02:21 -- common/autotest_common.sh@10 -- # set +x 00:05:58.602 ************************************ 00:05:58.602 START TEST alias_rpc 00:05:58.602 ************************************ 00:05:58.602 23:02:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:58.863 * Looking for test storage... 00:05:58.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:58.863 23:02:21 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:58.863 23:02:21 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2616661 00:05:58.863 23:02:21 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2616661 00:05:58.863 23:02:21 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:58.863 23:02:21 -- common/autotest_common.sh@819 -- # '[' -z 2616661 ']' 00:05:58.863 23:02:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.863 23:02:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:58.863 23:02:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:58.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.863 23:02:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:58.863 23:02:21 -- common/autotest_common.sh@10 -- # set +x 00:05:58.863 [2024-06-07 23:02:21.431218] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:58.863 [2024-06-07 23:02:21.431313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2616661 ] 00:05:58.863 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.863 [2024-06-07 23:02:21.493422] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.863 [2024-06-07 23:02:21.522378] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:58.863 [2024-06-07 23:02:21.522512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.807 23:02:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:59.807 23:02:22 -- common/autotest_common.sh@852 -- # return 0 00:05:59.807 23:02:22 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:59.807 23:02:22 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2616661 00:05:59.807 23:02:22 -- common/autotest_common.sh@926 -- # '[' -z 2616661 ']' 00:05:59.807 23:02:22 -- common/autotest_common.sh@930 -- # kill -0 2616661 00:05:59.807 23:02:22 -- common/autotest_common.sh@931 -- # uname 00:05:59.807 23:02:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:59.807 23:02:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2616661 00:05:59.807 23:02:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:59.807 23:02:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:59.807 23:02:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2616661' 00:05:59.807 killing process with pid 2616661 00:05:59.807 23:02:22 -- common/autotest_common.sh@945 -- # kill 2616661 00:05:59.807 23:02:22 -- common/autotest_common.sh@950 -- # wait 2616661 00:06:00.068 00:06:00.068 real 0m1.339s 00:06:00.068 user 0m1.465s 00:06:00.068 sys 0m0.367s 00:06:00.068 23:02:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.068 23:02:22 -- common/autotest_common.sh@10 -- # set +x 00:06:00.068 ************************************ 00:06:00.068 END TEST alias_rpc 00:06:00.068 ************************************ 00:06:00.068 23:02:22 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:06:00.068 23:02:22 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:00.068 23:02:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:00.068 23:02:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:00.068 23:02:22 -- common/autotest_common.sh@10 -- # set +x 00:06:00.068 ************************************ 00:06:00.069 START TEST spdkcli_tcp 00:06:00.069 ************************************ 00:06:00.069 23:02:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:00.069 * Looking for test storage... 
00:06:00.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:00.330 23:02:22 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:00.330 23:02:22 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:00.330 23:02:22 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:00.330 23:02:22 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:00.330 23:02:22 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:00.330 23:02:22 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:00.330 23:02:22 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:00.330 23:02:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:00.330 23:02:22 -- common/autotest_common.sh@10 -- # set +x 00:06:00.330 23:02:22 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2616905 00:06:00.330 23:02:22 -- spdkcli/tcp.sh@27 -- # waitforlisten 2616905 00:06:00.330 23:02:22 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:00.330 23:02:22 -- common/autotest_common.sh@819 -- # '[' -z 2616905 ']' 00:06:00.330 23:02:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.330 23:02:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:00.330 23:02:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.330 23:02:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:00.330 23:02:22 -- common/autotest_common.sh@10 -- # set +x 00:06:00.330 [2024-06-07 23:02:22.818316] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:00.330 [2024-06-07 23:02:22.818395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2616905 ] 00:06:00.330 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.330 [2024-06-07 23:02:22.884312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.330 [2024-06-07 23:02:22.922840] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:00.330 [2024-06-07 23:02:22.923111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.331 [2024-06-07 23:02:22.923113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.274 23:02:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:01.274 23:02:23 -- common/autotest_common.sh@852 -- # return 0 00:06:01.274 23:02:23 -- spdkcli/tcp.sh@31 -- # socat_pid=2617092 00:06:01.275 23:02:23 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:01.275 23:02:23 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:01.275 [ 00:06:01.275 "bdev_malloc_delete", 00:06:01.275 "bdev_malloc_create", 00:06:01.275 "bdev_null_resize", 00:06:01.275 "bdev_null_delete", 00:06:01.275 "bdev_null_create", 00:06:01.275 "bdev_nvme_cuse_unregister", 00:06:01.275 "bdev_nvme_cuse_register", 00:06:01.275 "bdev_opal_new_user", 00:06:01.275 "bdev_opal_set_lock_state", 00:06:01.275 "bdev_opal_delete", 00:06:01.275 "bdev_opal_get_info", 00:06:01.275 "bdev_opal_create", 00:06:01.275 "bdev_nvme_opal_revert", 00:06:01.275 "bdev_nvme_opal_init", 00:06:01.275 "bdev_nvme_send_cmd", 00:06:01.275 "bdev_nvme_get_path_iostat", 00:06:01.275 "bdev_nvme_get_mdns_discovery_info", 00:06:01.275 "bdev_nvme_stop_mdns_discovery", 00:06:01.275 "bdev_nvme_start_mdns_discovery", 00:06:01.275 "bdev_nvme_set_multipath_policy", 00:06:01.275 "bdev_nvme_set_preferred_path", 00:06:01.275 "bdev_nvme_get_io_paths", 00:06:01.275 "bdev_nvme_remove_error_injection", 00:06:01.275 "bdev_nvme_add_error_injection", 00:06:01.275 "bdev_nvme_get_discovery_info", 00:06:01.275 "bdev_nvme_stop_discovery", 00:06:01.275 "bdev_nvme_start_discovery", 00:06:01.275 "bdev_nvme_get_controller_health_info", 00:06:01.275 "bdev_nvme_disable_controller", 00:06:01.275 "bdev_nvme_enable_controller", 00:06:01.275 "bdev_nvme_reset_controller", 00:06:01.275 "bdev_nvme_get_transport_statistics", 00:06:01.275 "bdev_nvme_apply_firmware", 00:06:01.275 "bdev_nvme_detach_controller", 00:06:01.275 "bdev_nvme_get_controllers", 00:06:01.275 "bdev_nvme_attach_controller", 00:06:01.275 "bdev_nvme_set_hotplug", 00:06:01.275 "bdev_nvme_set_options", 00:06:01.275 "bdev_passthru_delete", 00:06:01.275 "bdev_passthru_create", 00:06:01.275 "bdev_lvol_grow_lvstore", 00:06:01.275 "bdev_lvol_get_lvols", 00:06:01.275 "bdev_lvol_get_lvstores", 00:06:01.275 "bdev_lvol_delete", 00:06:01.275 "bdev_lvol_set_read_only", 00:06:01.275 "bdev_lvol_resize", 00:06:01.275 "bdev_lvol_decouple_parent", 00:06:01.275 "bdev_lvol_inflate", 00:06:01.275 "bdev_lvol_rename", 00:06:01.275 "bdev_lvol_clone_bdev", 00:06:01.275 "bdev_lvol_clone", 00:06:01.275 "bdev_lvol_snapshot", 00:06:01.275 "bdev_lvol_create", 00:06:01.275 "bdev_lvol_delete_lvstore", 00:06:01.275 "bdev_lvol_rename_lvstore", 00:06:01.275 "bdev_lvol_create_lvstore", 00:06:01.275 "bdev_raid_set_options", 00:06:01.275 
"bdev_raid_remove_base_bdev", 00:06:01.275 "bdev_raid_add_base_bdev", 00:06:01.275 "bdev_raid_delete", 00:06:01.275 "bdev_raid_create", 00:06:01.275 "bdev_raid_get_bdevs", 00:06:01.275 "bdev_error_inject_error", 00:06:01.275 "bdev_error_delete", 00:06:01.275 "bdev_error_create", 00:06:01.275 "bdev_split_delete", 00:06:01.275 "bdev_split_create", 00:06:01.275 "bdev_delay_delete", 00:06:01.275 "bdev_delay_create", 00:06:01.275 "bdev_delay_update_latency", 00:06:01.275 "bdev_zone_block_delete", 00:06:01.275 "bdev_zone_block_create", 00:06:01.275 "blobfs_create", 00:06:01.275 "blobfs_detect", 00:06:01.275 "blobfs_set_cache_size", 00:06:01.275 "bdev_aio_delete", 00:06:01.275 "bdev_aio_rescan", 00:06:01.275 "bdev_aio_create", 00:06:01.275 "bdev_ftl_set_property", 00:06:01.275 "bdev_ftl_get_properties", 00:06:01.275 "bdev_ftl_get_stats", 00:06:01.275 "bdev_ftl_unmap", 00:06:01.275 "bdev_ftl_unload", 00:06:01.275 "bdev_ftl_delete", 00:06:01.275 "bdev_ftl_load", 00:06:01.275 "bdev_ftl_create", 00:06:01.275 "bdev_virtio_attach_controller", 00:06:01.275 "bdev_virtio_scsi_get_devices", 00:06:01.275 "bdev_virtio_detach_controller", 00:06:01.275 "bdev_virtio_blk_set_hotplug", 00:06:01.275 "bdev_iscsi_delete", 00:06:01.275 "bdev_iscsi_create", 00:06:01.275 "bdev_iscsi_set_options", 00:06:01.275 "accel_error_inject_error", 00:06:01.275 "ioat_scan_accel_module", 00:06:01.275 "dsa_scan_accel_module", 00:06:01.275 "iaa_scan_accel_module", 00:06:01.275 "vfu_virtio_create_scsi_endpoint", 00:06:01.275 "vfu_virtio_scsi_remove_target", 00:06:01.275 "vfu_virtio_scsi_add_target", 00:06:01.275 "vfu_virtio_create_blk_endpoint", 00:06:01.275 "vfu_virtio_delete_endpoint", 00:06:01.275 "iscsi_set_options", 00:06:01.275 "iscsi_get_auth_groups", 00:06:01.275 "iscsi_auth_group_remove_secret", 00:06:01.275 "iscsi_auth_group_add_secret", 00:06:01.275 "iscsi_delete_auth_group", 00:06:01.275 "iscsi_create_auth_group", 00:06:01.275 "iscsi_set_discovery_auth", 00:06:01.275 "iscsi_get_options", 00:06:01.275 "iscsi_target_node_request_logout", 00:06:01.275 "iscsi_target_node_set_redirect", 00:06:01.275 "iscsi_target_node_set_auth", 00:06:01.275 "iscsi_target_node_add_lun", 00:06:01.275 "iscsi_get_connections", 00:06:01.275 "iscsi_portal_group_set_auth", 00:06:01.275 "iscsi_start_portal_group", 00:06:01.275 "iscsi_delete_portal_group", 00:06:01.275 "iscsi_create_portal_group", 00:06:01.275 "iscsi_get_portal_groups", 00:06:01.275 "iscsi_delete_target_node", 00:06:01.275 "iscsi_target_node_remove_pg_ig_maps", 00:06:01.275 "iscsi_target_node_add_pg_ig_maps", 00:06:01.275 "iscsi_create_target_node", 00:06:01.275 "iscsi_get_target_nodes", 00:06:01.275 "iscsi_delete_initiator_group", 00:06:01.275 "iscsi_initiator_group_remove_initiators", 00:06:01.275 "iscsi_initiator_group_add_initiators", 00:06:01.275 "iscsi_create_initiator_group", 00:06:01.275 "iscsi_get_initiator_groups", 00:06:01.275 "nvmf_set_crdt", 00:06:01.275 "nvmf_set_config", 00:06:01.275 "nvmf_set_max_subsystems", 00:06:01.275 "nvmf_subsystem_get_listeners", 00:06:01.275 "nvmf_subsystem_get_qpairs", 00:06:01.275 "nvmf_subsystem_get_controllers", 00:06:01.275 "nvmf_get_stats", 00:06:01.275 "nvmf_get_transports", 00:06:01.275 "nvmf_create_transport", 00:06:01.275 "nvmf_get_targets", 00:06:01.275 "nvmf_delete_target", 00:06:01.275 "nvmf_create_target", 00:06:01.275 "nvmf_subsystem_allow_any_host", 00:06:01.275 "nvmf_subsystem_remove_host", 00:06:01.275 "nvmf_subsystem_add_host", 00:06:01.275 "nvmf_subsystem_remove_ns", 00:06:01.275 "nvmf_subsystem_add_ns", 00:06:01.275 
"nvmf_subsystem_listener_set_ana_state", 00:06:01.275 "nvmf_discovery_get_referrals", 00:06:01.275 "nvmf_discovery_remove_referral", 00:06:01.275 "nvmf_discovery_add_referral", 00:06:01.275 "nvmf_subsystem_remove_listener", 00:06:01.275 "nvmf_subsystem_add_listener", 00:06:01.275 "nvmf_delete_subsystem", 00:06:01.275 "nvmf_create_subsystem", 00:06:01.275 "nvmf_get_subsystems", 00:06:01.275 "env_dpdk_get_mem_stats", 00:06:01.275 "nbd_get_disks", 00:06:01.275 "nbd_stop_disk", 00:06:01.275 "nbd_start_disk", 00:06:01.275 "ublk_recover_disk", 00:06:01.275 "ublk_get_disks", 00:06:01.275 "ublk_stop_disk", 00:06:01.275 "ublk_start_disk", 00:06:01.275 "ublk_destroy_target", 00:06:01.275 "ublk_create_target", 00:06:01.275 "virtio_blk_create_transport", 00:06:01.275 "virtio_blk_get_transports", 00:06:01.275 "vhost_controller_set_coalescing", 00:06:01.275 "vhost_get_controllers", 00:06:01.275 "vhost_delete_controller", 00:06:01.275 "vhost_create_blk_controller", 00:06:01.275 "vhost_scsi_controller_remove_target", 00:06:01.275 "vhost_scsi_controller_add_target", 00:06:01.275 "vhost_start_scsi_controller", 00:06:01.275 "vhost_create_scsi_controller", 00:06:01.275 "thread_set_cpumask", 00:06:01.275 "framework_get_scheduler", 00:06:01.275 "framework_set_scheduler", 00:06:01.275 "framework_get_reactors", 00:06:01.275 "thread_get_io_channels", 00:06:01.275 "thread_get_pollers", 00:06:01.275 "thread_get_stats", 00:06:01.275 "framework_monitor_context_switch", 00:06:01.275 "spdk_kill_instance", 00:06:01.275 "log_enable_timestamps", 00:06:01.275 "log_get_flags", 00:06:01.275 "log_clear_flag", 00:06:01.275 "log_set_flag", 00:06:01.275 "log_get_level", 00:06:01.275 "log_set_level", 00:06:01.275 "log_get_print_level", 00:06:01.275 "log_set_print_level", 00:06:01.275 "framework_enable_cpumask_locks", 00:06:01.275 "framework_disable_cpumask_locks", 00:06:01.275 "framework_wait_init", 00:06:01.275 "framework_start_init", 00:06:01.275 "scsi_get_devices", 00:06:01.275 "bdev_get_histogram", 00:06:01.275 "bdev_enable_histogram", 00:06:01.275 "bdev_set_qos_limit", 00:06:01.275 "bdev_set_qd_sampling_period", 00:06:01.275 "bdev_get_bdevs", 00:06:01.275 "bdev_reset_iostat", 00:06:01.275 "bdev_get_iostat", 00:06:01.275 "bdev_examine", 00:06:01.275 "bdev_wait_for_examine", 00:06:01.275 "bdev_set_options", 00:06:01.275 "notify_get_notifications", 00:06:01.275 "notify_get_types", 00:06:01.275 "accel_get_stats", 00:06:01.275 "accel_set_options", 00:06:01.275 "accel_set_driver", 00:06:01.275 "accel_crypto_key_destroy", 00:06:01.275 "accel_crypto_keys_get", 00:06:01.275 "accel_crypto_key_create", 00:06:01.275 "accel_assign_opc", 00:06:01.275 "accel_get_module_info", 00:06:01.275 "accel_get_opc_assignments", 00:06:01.275 "vmd_rescan", 00:06:01.275 "vmd_remove_device", 00:06:01.275 "vmd_enable", 00:06:01.275 "sock_set_default_impl", 00:06:01.275 "sock_impl_set_options", 00:06:01.275 "sock_impl_get_options", 00:06:01.275 "iobuf_get_stats", 00:06:01.275 "iobuf_set_options", 00:06:01.275 "framework_get_pci_devices", 00:06:01.275 "framework_get_config", 00:06:01.275 "framework_get_subsystems", 00:06:01.275 "vfu_tgt_set_base_path", 00:06:01.275 "trace_get_info", 00:06:01.275 "trace_get_tpoint_group_mask", 00:06:01.275 "trace_disable_tpoint_group", 00:06:01.276 "trace_enable_tpoint_group", 00:06:01.276 "trace_clear_tpoint_mask", 00:06:01.276 "trace_set_tpoint_mask", 00:06:01.276 "spdk_get_version", 00:06:01.276 "rpc_get_methods" 00:06:01.276 ] 00:06:01.276 23:02:23 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:01.276 
23:02:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:01.276 23:02:23 -- common/autotest_common.sh@10 -- # set +x 00:06:01.276 23:02:23 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:01.276 23:02:23 -- spdkcli/tcp.sh@38 -- # killprocess 2616905 00:06:01.276 23:02:23 -- common/autotest_common.sh@926 -- # '[' -z 2616905 ']' 00:06:01.276 23:02:23 -- common/autotest_common.sh@930 -- # kill -0 2616905 00:06:01.276 23:02:23 -- common/autotest_common.sh@931 -- # uname 00:06:01.276 23:02:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:01.276 23:02:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2616905 00:06:01.276 23:02:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:01.276 23:02:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:01.276 23:02:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2616905' 00:06:01.276 killing process with pid 2616905 00:06:01.276 23:02:23 -- common/autotest_common.sh@945 -- # kill 2616905 00:06:01.276 23:02:23 -- common/autotest_common.sh@950 -- # wait 2616905 00:06:01.539 00:06:01.539 real 0m1.363s 00:06:01.539 user 0m2.537s 00:06:01.539 sys 0m0.408s 00:06:01.539 23:02:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.539 23:02:24 -- common/autotest_common.sh@10 -- # set +x 00:06:01.539 ************************************ 00:06:01.539 END TEST spdkcli_tcp 00:06:01.539 ************************************ 00:06:01.539 23:02:24 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:01.539 23:02:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:01.539 23:02:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:01.539 23:02:24 -- common/autotest_common.sh@10 -- # set +x 00:06:01.539 ************************************ 00:06:01.539 START TEST dpdk_mem_utility 00:06:01.539 ************************************ 00:06:01.539 23:02:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:01.539 * Looking for test storage... 00:06:01.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:01.539 23:02:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:01.539 23:02:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2617167 00:06:01.539 23:02:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2617167 00:06:01.539 23:02:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:01.539 23:02:24 -- common/autotest_common.sh@819 -- # '[' -z 2617167 ']' 00:06:01.539 23:02:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.539 23:02:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:01.539 23:02:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:01.539 23:02:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:01.539 23:02:24 -- common/autotest_common.sh@10 -- # set +x 00:06:01.539 [2024-06-07 23:02:24.211559] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:01.539 [2024-06-07 23:02:24.211616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2617167 ] 00:06:01.829 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.829 [2024-06-07 23:02:24.281901] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.829 [2024-06-07 23:02:24.316439] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:01.829 [2024-06-07 23:02:24.316549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.457 23:02:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:02.457 23:02:24 -- common/autotest_common.sh@852 -- # return 0 00:06:02.457 23:02:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:02.457 23:02:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:02.457 23:02:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:02.457 23:02:24 -- common/autotest_common.sh@10 -- # set +x 00:06:02.457 { 00:06:02.457 "filename": "/tmp/spdk_mem_dump.txt" 00:06:02.457 } 00:06:02.457 23:02:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:02.457 23:02:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:02.457 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:02.457 1 heaps totaling size 814.000000 MiB 00:06:02.457 size: 814.000000 MiB heap id: 0 00:06:02.457 end heaps---------- 00:06:02.457 8 mempools totaling size 598.116089 MiB 00:06:02.457 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:02.457 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:02.457 size: 84.521057 MiB name: bdev_io_2617167 00:06:02.457 size: 51.011292 MiB name: evtpool_2617167 00:06:02.457 size: 50.003479 MiB name: msgpool_2617167 00:06:02.457 size: 21.763794 MiB name: PDU_Pool 00:06:02.457 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:02.457 size: 0.026123 MiB name: Session_Pool 00:06:02.457 end mempools------- 00:06:02.457 6 memzones totaling size 4.142822 MiB 00:06:02.457 size: 1.000366 MiB name: RG_ring_0_2617167 00:06:02.457 size: 1.000366 MiB name: RG_ring_1_2617167 00:06:02.457 size: 1.000366 MiB name: RG_ring_4_2617167 00:06:02.457 size: 1.000366 MiB name: RG_ring_5_2617167 00:06:02.457 size: 0.125366 MiB name: RG_ring_2_2617167 00:06:02.457 size: 0.015991 MiB name: RG_ring_3_2617167 00:06:02.457 end memzones------- 00:06:02.457 23:02:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:02.457 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:02.457 list of free elements. 
size: 12.519348 MiB 00:06:02.457 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:02.457 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:02.457 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:02.457 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:02.457 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:02.457 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:02.457 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:02.457 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:02.457 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:02.457 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:02.457 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:02.457 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:02.457 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:02.457 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:02.457 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:02.457 list of standard malloc elements. size: 199.218079 MiB 00:06:02.457 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:02.457 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:02.457 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:02.457 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:02.457 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:02.457 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:02.457 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:02.457 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:02.457 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:02.457 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:02.457 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:02.457 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:02.457 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:02.457 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:02.457 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:02.457 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:02.457 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:02.457 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:02.457 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:02.457 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:02.457 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:02.457 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:02.457 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:02.457 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:02.457 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:02.457 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:02.457 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:02.457 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:02.457 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:02.457 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:02.457 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:02.457 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:02.457 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:06:02.457 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:02.457 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:02.457 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:02.457 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:02.457 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:02.457 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:02.457 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:02.457 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:02.457 list of memzone associated elements. size: 602.262573 MiB 00:06:02.457 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:02.457 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:02.457 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:02.458 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:02.458 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:02.458 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2617167_0 00:06:02.458 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:02.458 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2617167_0 00:06:02.458 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:02.458 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2617167_0 00:06:02.458 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:02.458 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:02.458 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:02.458 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:02.458 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:02.458 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2617167 00:06:02.458 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:02.458 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2617167 00:06:02.458 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:02.458 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2617167 00:06:02.458 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:02.458 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:02.458 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:02.458 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:02.458 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:02.458 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:02.458 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:02.458 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:02.458 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:02.458 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2617167 00:06:02.458 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:02.458 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2617167 00:06:02.458 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:02.458 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2617167 00:06:02.458 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:02.458 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2617167 00:06:02.458 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:02.458 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2617167 00:06:02.458 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:02.458 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:02.458 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:02.458 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:02.458 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:02.458 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:02.458 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:02.458 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2617167 00:06:02.458 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:02.458 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:02.458 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:02.458 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:02.458 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:02.458 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2617167 00:06:02.458 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:02.458 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:02.458 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:02.458 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2617167 00:06:02.458 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:02.458 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2617167 00:06:02.458 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:02.458 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:02.458 23:02:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:02.458 23:02:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2617167 00:06:02.458 23:02:25 -- common/autotest_common.sh@926 -- # '[' -z 2617167 ']' 00:06:02.458 23:02:25 -- common/autotest_common.sh@930 -- # kill -0 2617167 00:06:02.458 23:02:25 -- common/autotest_common.sh@931 -- # uname 00:06:02.458 23:02:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:02.458 23:02:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2617167 00:06:02.458 23:02:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:02.458 23:02:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:02.458 23:02:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2617167' 00:06:02.458 killing process with pid 2617167 00:06:02.458 23:02:25 -- common/autotest_common.sh@945 -- # kill 2617167 00:06:02.458 23:02:25 -- common/autotest_common.sh@950 -- # wait 2617167 00:06:02.718 00:06:02.718 real 0m1.241s 00:06:02.718 user 0m1.295s 00:06:02.718 sys 0m0.374s 00:06:02.718 23:02:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.718 23:02:25 -- common/autotest_common.sh@10 -- # set +x 00:06:02.718 ************************************ 00:06:02.718 END TEST dpdk_mem_utility 00:06:02.718 ************************************ 00:06:02.718 23:02:25 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:02.718 23:02:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:02.718 23:02:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:02.718 23:02:25 -- common/autotest_common.sh@10 -- # set +x 
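Note on the dpdk_mem_utility run that just finished: the heap/mempool/memzone dump above is produced by asking the running target for its DPDK memory stats over RPC and then post-processing the resulting dump file with the helper script. A minimal hand-run sketch of the same flow, with paths shortened relative to the SPDK tree and assuming a target already listening on the default /var/tmp/spdk.sock, would be:
    ./scripts/rpc.py env_dpdk_get_mem_stats   # target writes its stats to /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py                # summarize heaps, mempools and memzones from that dump
    ./scripts/dpdk_mem_info.py -m 0           # per-element detail for heap id 0, as shown above
Both the env_dpdk_get_mem_stats method and the -m 0 invocation appear verbatim in the trace above; only the shortened paths are an assumption here.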
00:06:02.718 ************************************ 00:06:02.718 START TEST event 00:06:02.718 ************************************ 00:06:02.718 23:02:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:02.978 * Looking for test storage... 00:06:02.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:02.978 23:02:25 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:02.978 23:02:25 -- bdev/nbd_common.sh@6 -- # set -e 00:06:02.978 23:02:25 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:02.978 23:02:25 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:02.978 23:02:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:02.978 23:02:25 -- common/autotest_common.sh@10 -- # set +x 00:06:02.978 ************************************ 00:06:02.978 START TEST event_perf 00:06:02.978 ************************************ 00:06:02.978 23:02:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:02.978 Running I/O for 1 seconds...[2024-06-07 23:02:25.475062] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:02.978 [2024-06-07 23:02:25.475160] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2617550 ] 00:06:02.978 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.978 [2024-06-07 23:02:25.540775] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:02.978 [2024-06-07 23:02:25.574261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.978 [2024-06-07 23:02:25.574301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.978 [2024-06-07 23:02:25.574463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:02.978 [2024-06-07 23:02:25.574552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.363 Running I/O for 1 seconds... 00:06:04.363 lcore 0: 171180 00:06:04.363 lcore 1: 171182 00:06:04.363 lcore 2: 171175 00:06:04.363 lcore 3: 171178 00:06:04.363 done. 
00:06:04.363 00:06:04.363 real 0m1.160s 00:06:04.363 user 0m4.075s 00:06:04.363 sys 0m0.078s 00:06:04.363 23:02:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.363 23:02:26 -- common/autotest_common.sh@10 -- # set +x 00:06:04.363 ************************************ 00:06:04.363 END TEST event_perf 00:06:04.363 ************************************ 00:06:04.363 23:02:26 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:04.363 23:02:26 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:04.363 23:02:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:04.363 23:02:26 -- common/autotest_common.sh@10 -- # set +x 00:06:04.363 ************************************ 00:06:04.363 START TEST event_reactor 00:06:04.363 ************************************ 00:06:04.363 23:02:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:04.363 [2024-06-07 23:02:26.677132] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:04.363 [2024-06-07 23:02:26.677227] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2617909 ] 00:06:04.363 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.363 [2024-06-07 23:02:26.739695] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.363 [2024-06-07 23:02:26.767785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.305 test_start 00:06:05.305 oneshot 00:06:05.305 tick 100 00:06:05.305 tick 100 00:06:05.305 tick 250 00:06:05.305 tick 100 00:06:05.305 tick 100 00:06:05.305 tick 100 00:06:05.305 tick 250 00:06:05.305 tick 500 00:06:05.305 tick 100 00:06:05.305 tick 100 00:06:05.305 tick 250 00:06:05.305 tick 100 00:06:05.305 tick 100 00:06:05.305 test_end 00:06:05.305 00:06:05.305 real 0m1.149s 00:06:05.305 user 0m1.078s 00:06:05.305 sys 0m0.067s 00:06:05.305 23:02:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.305 23:02:27 -- common/autotest_common.sh@10 -- # set +x 00:06:05.305 ************************************ 00:06:05.305 END TEST event_reactor 00:06:05.305 ************************************ 00:06:05.305 23:02:27 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:05.305 23:02:27 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:05.305 23:02:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:05.305 23:02:27 -- common/autotest_common.sh@10 -- # set +x 00:06:05.305 ************************************ 00:06:05.305 START TEST event_reactor_perf 00:06:05.305 ************************************ 00:06:05.305 23:02:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:05.305 [2024-06-07 23:02:27.867225] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:05.305 [2024-06-07 23:02:27.867332] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2618122 ] 00:06:05.305 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.305 [2024-06-07 23:02:27.940618] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.306 [2024-06-07 23:02:27.971588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.694 test_start 00:06:06.694 test_end 00:06:06.694 Performance: 363665 events per second 00:06:06.694 00:06:06.694 real 0m1.162s 00:06:06.694 user 0m1.080s 00:06:06.694 sys 0m0.078s 00:06:06.694 23:02:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.694 23:02:29 -- common/autotest_common.sh@10 -- # set +x 00:06:06.694 ************************************ 00:06:06.694 END TEST event_reactor_perf 00:06:06.694 ************************************ 00:06:06.694 23:02:29 -- event/event.sh@49 -- # uname -s 00:06:06.694 23:02:29 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:06.694 23:02:29 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:06.694 23:02:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:06.694 23:02:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:06.694 23:02:29 -- common/autotest_common.sh@10 -- # set +x 00:06:06.694 ************************************ 00:06:06.694 START TEST event_scheduler 00:06:06.694 ************************************ 00:06:06.694 23:02:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:06.694 * Looking for test storage... 00:06:06.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:06.694 23:02:29 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:06.694 23:02:29 -- scheduler/scheduler.sh@35 -- # scheduler_pid=2618330 00:06:06.694 23:02:29 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:06.694 23:02:29 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:06.694 23:02:29 -- scheduler/scheduler.sh@37 -- # waitforlisten 2618330 00:06:06.694 23:02:29 -- common/autotest_common.sh@819 -- # '[' -z 2618330 ']' 00:06:06.694 23:02:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.694 23:02:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:06.694 23:02:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.694 23:02:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:06.694 23:02:29 -- common/autotest_common.sh@10 -- # set +x 00:06:06.694 [2024-06-07 23:02:29.199453] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:06.694 [2024-06-07 23:02:29.199530] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2618330 ] 00:06:06.694 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.694 [2024-06-07 23:02:29.256633] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:06.694 [2024-06-07 23:02:29.293173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.694 [2024-06-07 23:02:29.293339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.694 [2024-06-07 23:02:29.293662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:06.694 [2024-06-07 23:02:29.293663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.640 23:02:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:07.640 23:02:29 -- common/autotest_common.sh@852 -- # return 0 00:06:07.640 23:02:29 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:07.640 23:02:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:07.640 23:02:29 -- common/autotest_common.sh@10 -- # set +x 00:06:07.640 POWER: Env isn't set yet! 00:06:07.640 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:07.640 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:07.640 POWER: Cannot set governor of lcore 0 to userspace 00:06:07.640 POWER: Attempting to initialise PSTAT power management... 00:06:07.640 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:07.640 POWER: Initialized successfully for lcore 0 power management 00:06:07.640 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:07.640 POWER: Initialized successfully for lcore 1 power management 00:06:07.640 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:07.640 POWER: Initialized successfully for lcore 2 power management 00:06:07.640 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:07.640 POWER: Initialized successfully for lcore 3 power management 00:06:07.640 23:02:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:07.640 23:02:30 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:07.640 23:02:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:07.640 23:02:30 -- common/autotest_common.sh@10 -- # set +x 00:06:07.640 [2024-06-07 23:02:30.074164] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:07.640 23:02:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:07.640 23:02:30 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:07.640 23:02:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:07.640 23:02:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:07.640 23:02:30 -- common/autotest_common.sh@10 -- # set +x 00:06:07.640 ************************************ 00:06:07.640 START TEST scheduler_create_thread 00:06:07.640 ************************************ 00:06:07.640 23:02:30 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:06:07.640 23:02:30 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:07.640 23:02:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:07.640 23:02:30 -- common/autotest_common.sh@10 -- # set +x 00:06:07.640 2 00:06:07.640 23:02:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:07.640 23:02:30 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:07.640 23:02:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:07.640 23:02:30 -- common/autotest_common.sh@10 -- # set +x 00:06:07.640 3 00:06:07.640 23:02:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:07.640 23:02:30 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:07.640 23:02:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:07.640 23:02:30 -- common/autotest_common.sh@10 -- # set +x 00:06:07.640 4 00:06:07.640 23:02:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:07.640 23:02:30 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:07.640 23:02:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:07.640 23:02:30 -- common/autotest_common.sh@10 -- # set +x 00:06:07.640 5 00:06:07.640 23:02:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:07.640 23:02:30 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:07.640 23:02:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:07.641 23:02:30 -- common/autotest_common.sh@10 -- # set +x 00:06:07.641 6 00:06:07.641 23:02:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:07.641 23:02:30 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:07.641 23:02:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:07.641 23:02:30 -- common/autotest_common.sh@10 -- # set +x 00:06:07.641 7 00:06:07.641 23:02:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:07.641 23:02:30 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:07.641 23:02:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:07.641 23:02:30 -- common/autotest_common.sh@10 -- # set +x 00:06:07.641 8 00:06:07.641 23:02:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:07.641 23:02:30 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:07.641 23:02:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:07.641 23:02:30 -- common/autotest_common.sh@10 -- # set +x 00:06:07.641 9 00:06:07.641 
23:02:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:07.641 23:02:30 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:07.641 23:02:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:07.641 23:02:30 -- common/autotest_common.sh@10 -- # set +x 00:06:07.641 10 00:06:07.641 23:02:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:07.641 23:02:30 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:07.641 23:02:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:07.641 23:02:30 -- common/autotest_common.sh@10 -- # set +x 00:06:07.641 23:02:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:07.641 23:02:30 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:07.641 23:02:30 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:07.641 23:02:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:07.641 23:02:30 -- common/autotest_common.sh@10 -- # set +x 00:06:08.214 23:02:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.214 23:02:30 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:08.214 23:02:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.214 23:02:30 -- common/autotest_common.sh@10 -- # set +x 00:06:09.601 23:02:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:09.601 23:02:32 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:09.601 23:02:32 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:09.601 23:02:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:09.601 23:02:32 -- common/autotest_common.sh@10 -- # set +x 00:06:10.543 23:02:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:10.543 00:06:10.543 real 0m3.102s 00:06:10.543 user 0m0.024s 00:06:10.543 sys 0m0.007s 00:06:10.543 23:02:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.543 23:02:33 -- common/autotest_common.sh@10 -- # set +x 00:06:10.543 ************************************ 00:06:10.543 END TEST scheduler_create_thread 00:06:10.543 ************************************ 00:06:10.543 23:02:33 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:10.543 23:02:33 -- scheduler/scheduler.sh@46 -- # killprocess 2618330 00:06:10.543 23:02:33 -- common/autotest_common.sh@926 -- # '[' -z 2618330 ']' 00:06:10.543 23:02:33 -- common/autotest_common.sh@930 -- # kill -0 2618330 00:06:10.804 23:02:33 -- common/autotest_common.sh@931 -- # uname 00:06:10.804 23:02:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:10.804 23:02:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2618330 00:06:10.804 23:02:33 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:10.804 23:02:33 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:10.804 23:02:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2618330' 00:06:10.804 killing process with pid 2618330 00:06:10.804 23:02:33 -- common/autotest_common.sh@945 -- # kill 2618330 00:06:10.804 23:02:33 -- common/autotest_common.sh@950 -- # wait 2618330 00:06:11.066 [2024-06-07 23:02:33.564896] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:11.066 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:06:11.066 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:11.066 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:06:11.066 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:11.066 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:06:11.066 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:11.066 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:06:11.066 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:11.066 00:06:11.066 real 0m4.643s 00:06:11.066 user 0m9.050s 00:06:11.066 sys 0m0.332s 00:06:11.066 23:02:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.066 23:02:33 -- common/autotest_common.sh@10 -- # set +x 00:06:11.066 ************************************ 00:06:11.066 END TEST event_scheduler 00:06:11.066 ************************************ 00:06:11.066 23:02:33 -- event/event.sh@51 -- # modprobe -n nbd 00:06:11.066 23:02:33 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:11.066 23:02:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:11.066 23:02:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:11.066 23:02:33 -- common/autotest_common.sh@10 -- # set +x 00:06:11.327 ************************************ 00:06:11.327 START TEST app_repeat 00:06:11.327 ************************************ 00:06:11.327 23:02:33 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:06:11.327 23:02:33 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.327 23:02:33 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.327 23:02:33 -- event/event.sh@13 -- # local nbd_list 00:06:11.327 23:02:33 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.327 23:02:33 -- event/event.sh@14 -- # local bdev_list 00:06:11.327 23:02:33 -- event/event.sh@15 -- # local repeat_times=4 00:06:11.327 23:02:33 -- event/event.sh@17 -- # modprobe nbd 00:06:11.327 23:02:33 -- event/event.sh@19 -- # repeat_pid=2619378 00:06:11.327 23:02:33 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:11.327 23:02:33 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:11.327 23:02:33 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2619378' 00:06:11.327 Process app_repeat pid: 2619378 00:06:11.327 23:02:33 -- event/event.sh@23 -- # for i in {0..2} 00:06:11.327 23:02:33 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:11.327 spdk_app_start Round 0 00:06:11.327 23:02:33 -- event/event.sh@25 -- # waitforlisten 2619378 /var/tmp/spdk-nbd.sock 00:06:11.327 23:02:33 -- common/autotest_common.sh@819 -- # '[' -z 2619378 ']' 00:06:11.327 23:02:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.327 23:02:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:11.327 23:02:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:11.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:11.328 23:02:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:11.328 23:02:33 -- common/autotest_common.sh@10 -- # set +x 00:06:11.328 [2024-06-07 23:02:33.786536] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:11.328 [2024-06-07 23:02:33.786625] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2619378 ] 00:06:11.328 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.328 [2024-06-07 23:02:33.849271] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:11.328 [2024-06-07 23:02:33.878507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.328 [2024-06-07 23:02:33.878596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.899 23:02:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:11.899 23:02:34 -- common/autotest_common.sh@852 -- # return 0 00:06:11.899 23:02:34 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.160 Malloc0 00:06:12.160 23:02:34 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.420 Malloc1 00:06:12.420 23:02:34 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.420 23:02:34 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.420 23:02:34 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.420 23:02:34 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:12.420 23:02:34 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.420 23:02:34 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:12.420 23:02:34 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.420 23:02:34 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.420 23:02:34 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.420 23:02:34 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:12.420 23:02:34 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.420 23:02:34 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:12.420 23:02:34 -- bdev/nbd_common.sh@12 -- # local i 00:06:12.420 23:02:34 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:12.420 23:02:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.420 23:02:34 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:12.420 /dev/nbd0 00:06:12.420 23:02:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:12.420 23:02:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:12.420 23:02:35 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:12.420 23:02:35 -- common/autotest_common.sh@857 -- # local i 00:06:12.420 23:02:35 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:12.420 23:02:35 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:12.421 23:02:35 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:12.421 23:02:35 -- 
common/autotest_common.sh@861 -- # break 00:06:12.421 23:02:35 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:12.421 23:02:35 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:12.421 23:02:35 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.421 1+0 records in 00:06:12.421 1+0 records out 00:06:12.421 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244317 s, 16.8 MB/s 00:06:12.421 23:02:35 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:12.421 23:02:35 -- common/autotest_common.sh@874 -- # size=4096 00:06:12.421 23:02:35 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:12.421 23:02:35 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:12.421 23:02:35 -- common/autotest_common.sh@877 -- # return 0 00:06:12.421 23:02:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.421 23:02:35 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.421 23:02:35 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:12.681 /dev/nbd1 00:06:12.681 23:02:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:12.681 23:02:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:12.681 23:02:35 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:12.681 23:02:35 -- common/autotest_common.sh@857 -- # local i 00:06:12.681 23:02:35 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:12.681 23:02:35 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:12.681 23:02:35 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:12.681 23:02:35 -- common/autotest_common.sh@861 -- # break 00:06:12.681 23:02:35 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:12.681 23:02:35 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:12.681 23:02:35 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.681 1+0 records in 00:06:12.681 1+0 records out 00:06:12.681 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246993 s, 16.6 MB/s 00:06:12.681 23:02:35 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:12.681 23:02:35 -- common/autotest_common.sh@874 -- # size=4096 00:06:12.681 23:02:35 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:12.681 23:02:35 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:12.681 23:02:35 -- common/autotest_common.sh@877 -- # return 0 00:06:12.681 23:02:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.681 23:02:35 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.681 23:02:35 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.681 23:02:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.681 23:02:35 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:12.942 { 00:06:12.942 "nbd_device": "/dev/nbd0", 00:06:12.942 "bdev_name": "Malloc0" 00:06:12.942 }, 00:06:12.942 { 00:06:12.942 "nbd_device": "/dev/nbd1", 
00:06:12.942 "bdev_name": "Malloc1" 00:06:12.942 } 00:06:12.942 ]' 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:12.942 { 00:06:12.942 "nbd_device": "/dev/nbd0", 00:06:12.942 "bdev_name": "Malloc0" 00:06:12.942 }, 00:06:12.942 { 00:06:12.942 "nbd_device": "/dev/nbd1", 00:06:12.942 "bdev_name": "Malloc1" 00:06:12.942 } 00:06:12.942 ]' 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:12.942 /dev/nbd1' 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:12.942 /dev/nbd1' 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@65 -- # count=2 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@95 -- # count=2 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:12.942 256+0 records in 00:06:12.942 256+0 records out 00:06:12.942 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116167 s, 90.3 MB/s 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:12.942 256+0 records in 00:06:12.942 256+0 records out 00:06:12.942 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0180072 s, 58.2 MB/s 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:12.942 256+0 records in 00:06:12.942 256+0 records out 00:06:12.942 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.017186 s, 61.0 MB/s 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@51 -- # local i 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.942 23:02:35 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:13.281 23:02:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:13.281 23:02:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:13.281 23:02:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:13.281 23:02:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.281 23:02:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.281 23:02:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:13.281 23:02:35 -- bdev/nbd_common.sh@41 -- # break 00:06:13.281 23:02:35 -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.281 23:02:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.281 23:02:35 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:13.281 23:02:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:13.281 23:02:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:13.281 23:02:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:13.281 23:02:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.281 23:02:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.281 23:02:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:13.281 23:02:35 -- bdev/nbd_common.sh@41 -- # break 00:06:13.281 23:02:35 -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.281 23:02:35 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.281 23:02:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.281 23:02:35 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.541 23:02:36 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:13.541 23:02:36 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:13.541 23:02:36 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.541 23:02:36 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:13.541 23:02:36 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:13.541 23:02:36 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.541 23:02:36 -- bdev/nbd_common.sh@65 -- # true 00:06:13.541 23:02:36 -- bdev/nbd_common.sh@65 -- # count=0 00:06:13.541 23:02:36 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:13.541 23:02:36 -- bdev/nbd_common.sh@104 -- # count=0 00:06:13.541 23:02:36 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:13.541 23:02:36 -- bdev/nbd_common.sh@109 -- # return 0 00:06:13.541 23:02:36 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:13.541 23:02:36 -- event/event.sh@35 -- # 
sleep 3 00:06:13.801 [2024-06-07 23:02:36.337021] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.801 [2024-06-07 23:02:36.364761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.801 [2024-06-07 23:02:36.364762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.801 [2024-06-07 23:02:36.396338] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:13.801 [2024-06-07 23:02:36.396374] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:17.097 23:02:39 -- event/event.sh@23 -- # for i in {0..2} 00:06:17.097 23:02:39 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:17.097 spdk_app_start Round 1 00:06:17.097 23:02:39 -- event/event.sh@25 -- # waitforlisten 2619378 /var/tmp/spdk-nbd.sock 00:06:17.097 23:02:39 -- common/autotest_common.sh@819 -- # '[' -z 2619378 ']' 00:06:17.097 23:02:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:17.097 23:02:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:17.097 23:02:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:17.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:17.097 23:02:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:17.097 23:02:39 -- common/autotest_common.sh@10 -- # set +x 00:06:17.097 23:02:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:17.097 23:02:39 -- common/autotest_common.sh@852 -- # return 0 00:06:17.097 23:02:39 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:17.097 Malloc0 00:06:17.097 23:02:39 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:17.097 Malloc1 00:06:17.097 23:02:39 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.097 23:02:39 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.097 23:02:39 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.097 23:02:39 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:17.097 23:02:39 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.097 23:02:39 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:17.097 23:02:39 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.097 23:02:39 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.097 23:02:39 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.097 23:02:39 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:17.097 23:02:39 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.097 23:02:39 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:17.097 23:02:39 -- bdev/nbd_common.sh@12 -- # local i 00:06:17.097 23:02:39 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:17.097 23:02:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.097 23:02:39 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:17.358 /dev/nbd0 00:06:17.358 23:02:39 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:17.358 23:02:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:17.358 23:02:39 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:17.358 23:02:39 -- common/autotest_common.sh@857 -- # local i 00:06:17.358 23:02:39 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:17.358 23:02:39 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:17.358 23:02:39 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:17.358 23:02:39 -- common/autotest_common.sh@861 -- # break 00:06:17.358 23:02:39 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:17.358 23:02:39 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:17.358 23:02:39 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:17.358 1+0 records in 00:06:17.358 1+0 records out 00:06:17.358 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241921 s, 16.9 MB/s 00:06:17.358 23:02:39 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:17.358 23:02:39 -- common/autotest_common.sh@874 -- # size=4096 00:06:17.358 23:02:39 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:17.358 23:02:39 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:17.358 23:02:39 -- common/autotest_common.sh@877 -- # return 0 00:06:17.358 23:02:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.358 23:02:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.358 23:02:39 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:17.358 /dev/nbd1 00:06:17.620 23:02:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:17.620 23:02:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:17.620 23:02:40 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:17.620 23:02:40 -- common/autotest_common.sh@857 -- # local i 00:06:17.620 23:02:40 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:17.620 23:02:40 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:17.620 23:02:40 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:17.620 23:02:40 -- common/autotest_common.sh@861 -- # break 00:06:17.620 23:02:40 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:17.620 23:02:40 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:17.620 23:02:40 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:17.620 1+0 records in 00:06:17.620 1+0 records out 00:06:17.620 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278099 s, 14.7 MB/s 00:06:17.620 23:02:40 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:17.620 23:02:40 -- common/autotest_common.sh@874 -- # size=4096 00:06:17.620 23:02:40 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:17.620 23:02:40 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:17.620 23:02:40 -- common/autotest_common.sh@877 -- # return 0 00:06:17.620 23:02:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.620 23:02:40 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.620 23:02:40 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.620 23:02:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.620 23:02:40 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.620 23:02:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:17.620 { 00:06:17.620 "nbd_device": "/dev/nbd0", 00:06:17.620 "bdev_name": "Malloc0" 00:06:17.620 }, 00:06:17.620 { 00:06:17.620 "nbd_device": "/dev/nbd1", 00:06:17.620 "bdev_name": "Malloc1" 00:06:17.620 } 00:06:17.620 ]' 00:06:17.620 23:02:40 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:17.620 { 00:06:17.620 "nbd_device": "/dev/nbd0", 00:06:17.620 "bdev_name": "Malloc0" 00:06:17.620 }, 00:06:17.620 { 00:06:17.620 "nbd_device": "/dev/nbd1", 00:06:17.620 "bdev_name": "Malloc1" 00:06:17.620 } 00:06:17.620 ]' 00:06:17.620 23:02:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.620 23:02:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:17.620 /dev/nbd1' 00:06:17.620 23:02:40 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:17.620 /dev/nbd1' 00:06:17.620 23:02:40 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.620 23:02:40 -- bdev/nbd_common.sh@65 -- # count=2 00:06:17.620 23:02:40 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:17.620 23:02:40 -- bdev/nbd_common.sh@95 -- # count=2 00:06:17.620 23:02:40 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:17.620 23:02:40 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:17.620 23:02:40 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.620 23:02:40 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.620 23:02:40 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:17.620 23:02:40 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:17.620 23:02:40 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:17.620 23:02:40 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:17.620 256+0 records in 00:06:17.620 256+0 records out 00:06:17.620 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124907 s, 83.9 MB/s 00:06:17.620 23:02:40 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.620 23:02:40 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:17.882 256+0 records in 00:06:17.882 256+0 records out 00:06:17.882 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0163146 s, 64.3 MB/s 00:06:17.882 23:02:40 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.882 23:02:40 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:17.882 256+0 records in 00:06:17.882 256+0 records out 00:06:17.882 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.017986 s, 58.3 MB/s 00:06:17.882 23:02:40 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:17.882 23:02:40 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.882 23:02:40 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.882 23:02:40 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:17.882 23:02:40 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:17.882 23:02:40 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:17.882 23:02:40 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:17.882 23:02:40 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.882 23:02:40 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:17.882 23:02:40 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.882 23:02:40 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:17.882 23:02:40 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:17.882 23:02:40 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:17.882 23:02:40 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.882 23:02:40 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.882 23:02:40 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:17.882 23:02:40 -- bdev/nbd_common.sh@51 -- # local i 00:06:17.882 23:02:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.882 23:02:40 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:17.882 23:02:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:17.882 23:02:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:17.882 23:02:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:17.882 23:02:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.882 23:02:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.882 23:02:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:17.882 23:02:40 -- bdev/nbd_common.sh@41 -- # break 00:06:17.882 23:02:40 -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.882 23:02:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.882 23:02:40 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:18.143 23:02:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:18.143 23:02:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:18.144 23:02:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:18.144 23:02:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.144 23:02:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.144 23:02:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:18.144 23:02:40 -- bdev/nbd_common.sh@41 -- # break 00:06:18.144 23:02:40 -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.144 23:02:40 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.144 23:02:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.144 23:02:40 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.144 23:02:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:18.403 23:02:40 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:18.403 23:02:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.403 23:02:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:18.403 23:02:40 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:18.403 23:02:40 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:06:18.403 23:02:40 -- bdev/nbd_common.sh@65 -- # true 00:06:18.403 23:02:40 -- bdev/nbd_common.sh@65 -- # count=0 00:06:18.403 23:02:40 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:18.403 23:02:40 -- bdev/nbd_common.sh@104 -- # count=0 00:06:18.403 23:02:40 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:18.403 23:02:40 -- bdev/nbd_common.sh@109 -- # return 0 00:06:18.403 23:02:40 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:18.403 23:02:41 -- event/event.sh@35 -- # sleep 3 00:06:18.664 [2024-06-07 23:02:41.148657] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:18.664 [2024-06-07 23:02:41.176607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.664 [2024-06-07 23:02:41.176607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.664 [2024-06-07 23:02:41.208208] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:18.664 [2024-06-07 23:02:41.208248] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:21.971 23:02:44 -- event/event.sh@23 -- # for i in {0..2} 00:06:21.971 23:02:44 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:21.971 spdk_app_start Round 2 00:06:21.971 23:02:44 -- event/event.sh@25 -- # waitforlisten 2619378 /var/tmp/spdk-nbd.sock 00:06:21.971 23:02:44 -- common/autotest_common.sh@819 -- # '[' -z 2619378 ']' 00:06:21.971 23:02:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:21.971 23:02:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:21.971 23:02:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:21.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:21.971 23:02:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:21.971 23:02:44 -- common/autotest_common.sh@10 -- # set +x 00:06:21.971 23:02:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:21.971 23:02:44 -- common/autotest_common.sh@852 -- # return 0 00:06:21.971 23:02:44 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.971 Malloc0 00:06:21.971 23:02:44 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.971 Malloc1 00:06:21.971 23:02:44 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.971 23:02:44 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.971 23:02:44 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.971 23:02:44 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:21.971 23:02:44 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.971 23:02:44 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:21.971 23:02:44 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.971 23:02:44 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.971 23:02:44 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.971 23:02:44 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:21.971 23:02:44 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.971 23:02:44 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:21.971 23:02:44 -- bdev/nbd_common.sh@12 -- # local i 00:06:21.971 23:02:44 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:21.971 23:02:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.971 23:02:44 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:21.972 /dev/nbd0 00:06:22.233 23:02:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:22.233 23:02:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:22.233 23:02:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:22.233 23:02:44 -- common/autotest_common.sh@857 -- # local i 00:06:22.233 23:02:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:22.233 23:02:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:22.233 23:02:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:22.233 23:02:44 -- common/autotest_common.sh@861 -- # break 00:06:22.233 23:02:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:22.233 23:02:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:22.233 23:02:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.233 1+0 records in 00:06:22.233 1+0 records out 00:06:22.233 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269071 s, 15.2 MB/s 00:06:22.233 23:02:44 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:22.233 23:02:44 -- common/autotest_common.sh@874 -- # size=4096 00:06:22.233 23:02:44 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:22.233 23:02:44 -- common/autotest_common.sh@876 -- # 
'[' 4096 '!=' 0 ']' 00:06:22.233 23:02:44 -- common/autotest_common.sh@877 -- # return 0 00:06:22.233 23:02:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.233 23:02:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.233 23:02:44 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:22.233 /dev/nbd1 00:06:22.233 23:02:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:22.233 23:02:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:22.233 23:02:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:22.233 23:02:44 -- common/autotest_common.sh@857 -- # local i 00:06:22.233 23:02:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:22.233 23:02:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:22.233 23:02:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:22.233 23:02:44 -- common/autotest_common.sh@861 -- # break 00:06:22.233 23:02:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:22.233 23:02:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:22.233 23:02:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.233 1+0 records in 00:06:22.233 1+0 records out 00:06:22.233 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264577 s, 15.5 MB/s 00:06:22.233 23:02:44 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:22.233 23:02:44 -- common/autotest_common.sh@874 -- # size=4096 00:06:22.233 23:02:44 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:22.233 23:02:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:22.233 23:02:44 -- common/autotest_common.sh@877 -- # return 0 00:06:22.233 23:02:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.233 23:02:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.233 23:02:44 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.233 23:02:44 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.233 23:02:44 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.496 23:02:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:22.496 { 00:06:22.496 "nbd_device": "/dev/nbd0", 00:06:22.496 "bdev_name": "Malloc0" 00:06:22.496 }, 00:06:22.496 { 00:06:22.496 "nbd_device": "/dev/nbd1", 00:06:22.496 "bdev_name": "Malloc1" 00:06:22.496 } 00:06:22.496 ]' 00:06:22.496 23:02:45 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:22.496 { 00:06:22.496 "nbd_device": "/dev/nbd0", 00:06:22.496 "bdev_name": "Malloc0" 00:06:22.496 }, 00:06:22.496 { 00:06:22.496 "nbd_device": "/dev/nbd1", 00:06:22.496 "bdev_name": "Malloc1" 00:06:22.496 } 00:06:22.496 ]' 00:06:22.496 23:02:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.496 23:02:45 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:22.496 /dev/nbd1' 00:06:22.496 23:02:45 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:22.496 /dev/nbd1' 00:06:22.496 23:02:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.496 23:02:45 -- bdev/nbd_common.sh@65 -- # count=2 00:06:22.496 23:02:45 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:22.496 23:02:45 -- bdev/nbd_common.sh@95 -- # count=2 00:06:22.496 23:02:45 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:22.496 23:02:45 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:22.496 23:02:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.496 23:02:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.496 23:02:45 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:22.497 23:02:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:22.497 23:02:45 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:22.497 23:02:45 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:22.497 256+0 records in 00:06:22.497 256+0 records out 00:06:22.497 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124423 s, 84.3 MB/s 00:06:22.497 23:02:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.497 23:02:45 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:22.497 256+0 records in 00:06:22.497 256+0 records out 00:06:22.497 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0161472 s, 64.9 MB/s 00:06:22.497 23:02:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.497 23:02:45 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:22.497 256+0 records in 00:06:22.497 256+0 records out 00:06:22.497 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0171701 s, 61.1 MB/s 00:06:22.497 23:02:45 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:22.497 23:02:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.497 23:02:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.497 23:02:45 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:22.497 23:02:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:22.497 23:02:45 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:22.497 23:02:45 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:22.497 23:02:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.497 23:02:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:22.497 23:02:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.497 23:02:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:22.497 23:02:45 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:22.497 23:02:45 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:22.497 23:02:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.497 23:02:45 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.497 23:02:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:22.497 23:02:45 -- bdev/nbd_common.sh@51 -- # local i 00:06:22.497 23:02:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.497 23:02:45 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:22.758 23:02:45 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:22.758 23:02:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:22.758 23:02:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:22.758 23:02:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.758 23:02:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.758 23:02:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:22.758 23:02:45 -- bdev/nbd_common.sh@41 -- # break 00:06:22.758 23:02:45 -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.758 23:02:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.758 23:02:45 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:23.019 23:02:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:23.019 23:02:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:23.019 23:02:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:23.019 23:02:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.019 23:02:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.019 23:02:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:23.019 23:02:45 -- bdev/nbd_common.sh@41 -- # break 00:06:23.019 23:02:45 -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.019 23:02:45 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.019 23:02:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.019 23:02:45 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.019 23:02:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:23.019 23:02:45 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:23.019 23:02:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.019 23:02:45 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:23.019 23:02:45 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:23.019 23:02:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.019 23:02:45 -- bdev/nbd_common.sh@65 -- # true 00:06:23.019 23:02:45 -- bdev/nbd_common.sh@65 -- # count=0 00:06:23.019 23:02:45 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:23.019 23:02:45 -- bdev/nbd_common.sh@104 -- # count=0 00:06:23.019 23:02:45 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:23.019 23:02:45 -- bdev/nbd_common.sh@109 -- # return 0 00:06:23.019 23:02:45 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:23.280 23:02:45 -- event/event.sh@35 -- # sleep 3 00:06:23.280 [2024-06-07 23:02:45.958010] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.540 [2024-06-07 23:02:45.985545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.541 [2024-06-07 23:02:45.985548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.541 [2024-06-07 23:02:46.016960] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:23.541 [2024-06-07 23:02:46.016995] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
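Each app_repeat round traced above drives the same NBD round-trip: two malloc bdevs (bdev_malloc_create 64 4096, i.e. 64 MB with a 4 KiB block size) are created over the app's RPC socket, exported as /dev/nbd0 and /dev/nbd1, filled from a 1 MiB random temp file, read back and compared, then detached. A minimal sketch of that per-round flow, reconstructed from the commands in the trace (the full jenkins workspace paths are shortened here to rpc.py and nbdrandtest for readability):

  # create the backing bdevs on the app's RPC socket (64 MB, 4 KiB block size)
  rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # -> Malloc0
  rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # -> Malloc1
  # export them as kernel NBD block devices
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
  # fill /dev/nbd0 (and likewise /dev/nbd1) from a 1 MiB random file, then verify with cmp
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
  dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M nbdrandtest /dev/nbd0
  # detach again; waitfornbd_exit polls /proc/partitions until the devices are gone
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1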
00:06:26.844 23:02:48 -- event/event.sh@38 -- # waitforlisten 2619378 /var/tmp/spdk-nbd.sock 00:06:26.844 23:02:48 -- common/autotest_common.sh@819 -- # '[' -z 2619378 ']' 00:06:26.844 23:02:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:26.844 23:02:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:26.844 23:02:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:26.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:26.844 23:02:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:26.844 23:02:48 -- common/autotest_common.sh@10 -- # set +x 00:06:26.844 23:02:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:26.844 23:02:48 -- common/autotest_common.sh@852 -- # return 0 00:06:26.844 23:02:48 -- event/event.sh@39 -- # killprocess 2619378 00:06:26.844 23:02:48 -- common/autotest_common.sh@926 -- # '[' -z 2619378 ']' 00:06:26.844 23:02:48 -- common/autotest_common.sh@930 -- # kill -0 2619378 00:06:26.844 23:02:48 -- common/autotest_common.sh@931 -- # uname 00:06:26.844 23:02:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:26.844 23:02:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2619378 00:06:26.844 23:02:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:26.844 23:02:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:26.844 23:02:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2619378' 00:06:26.844 killing process with pid 2619378 00:06:26.844 23:02:49 -- common/autotest_common.sh@945 -- # kill 2619378 00:06:26.844 23:02:49 -- common/autotest_common.sh@950 -- # wait 2619378 00:06:26.844 spdk_app_start is called in Round 0. 00:06:26.844 Shutdown signal received, stop current app iteration 00:06:26.844 Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 reinitialization... 00:06:26.844 spdk_app_start is called in Round 1. 00:06:26.844 Shutdown signal received, stop current app iteration 00:06:26.844 Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 reinitialization... 00:06:26.844 spdk_app_start is called in Round 2. 00:06:26.844 Shutdown signal received, stop current app iteration 00:06:26.844 Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 reinitialization... 00:06:26.844 spdk_app_start is called in Round 3. 
00:06:26.844 Shutdown signal received, stop current app iteration 00:06:26.844 23:02:49 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:26.844 23:02:49 -- event/event.sh@42 -- # return 0 00:06:26.844 00:06:26.844 real 0m15.400s 00:06:26.844 user 0m33.338s 00:06:26.844 sys 0m2.114s 00:06:26.844 23:02:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.844 23:02:49 -- common/autotest_common.sh@10 -- # set +x 00:06:26.844 ************************************ 00:06:26.844 END TEST app_repeat 00:06:26.844 ************************************ 00:06:26.844 23:02:49 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:26.844 23:02:49 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:26.844 23:02:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:26.844 23:02:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:26.844 23:02:49 -- common/autotest_common.sh@10 -- # set +x 00:06:26.844 ************************************ 00:06:26.844 START TEST cpu_locks 00:06:26.844 ************************************ 00:06:26.844 23:02:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:26.844 * Looking for test storage... 00:06:26.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:26.844 23:02:49 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:26.844 23:02:49 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:26.844 23:02:49 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:26.844 23:02:49 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:26.844 23:02:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:26.844 23:02:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:26.844 23:02:49 -- common/autotest_common.sh@10 -- # set +x 00:06:26.844 ************************************ 00:06:26.844 START TEST default_locks 00:06:26.844 ************************************ 00:06:26.844 23:02:49 -- common/autotest_common.sh@1104 -- # default_locks 00:06:26.844 23:02:49 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2622663 00:06:26.844 23:02:49 -- event/cpu_locks.sh@47 -- # waitforlisten 2622663 00:06:26.844 23:02:49 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.844 23:02:49 -- common/autotest_common.sh@819 -- # '[' -z 2622663 ']' 00:06:26.844 23:02:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.844 23:02:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:26.844 23:02:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.844 23:02:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:26.844 23:02:49 -- common/autotest_common.sh@10 -- # set +x 00:06:26.844 [2024-06-07 23:02:49.349794] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:26.844 [2024-06-07 23:02:49.349852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2622663 ] 00:06:26.844 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.844 [2024-06-07 23:02:49.410743] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.844 [2024-06-07 23:02:49.442460] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:26.844 [2024-06-07 23:02:49.442602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.417 23:02:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:27.417 23:02:50 -- common/autotest_common.sh@852 -- # return 0 00:06:27.417 23:02:50 -- event/cpu_locks.sh@49 -- # locks_exist 2622663 00:06:27.417 23:02:50 -- event/cpu_locks.sh@22 -- # lslocks -p 2622663 00:06:27.417 23:02:50 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.988 lslocks: write error 00:06:27.988 23:02:50 -- event/cpu_locks.sh@50 -- # killprocess 2622663 00:06:27.988 23:02:50 -- common/autotest_common.sh@926 -- # '[' -z 2622663 ']' 00:06:27.988 23:02:50 -- common/autotest_common.sh@930 -- # kill -0 2622663 00:06:27.988 23:02:50 -- common/autotest_common.sh@931 -- # uname 00:06:27.988 23:02:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:27.988 23:02:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2622663 00:06:27.988 23:02:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:27.988 23:02:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:27.988 23:02:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2622663' 00:06:27.988 killing process with pid 2622663 00:06:27.988 23:02:50 -- common/autotest_common.sh@945 -- # kill 2622663 00:06:27.988 23:02:50 -- common/autotest_common.sh@950 -- # wait 2622663 00:06:28.250 23:02:50 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2622663 00:06:28.250 23:02:50 -- common/autotest_common.sh@640 -- # local es=0 00:06:28.250 23:02:50 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 2622663 00:06:28.250 23:02:50 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:28.250 23:02:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:28.250 23:02:50 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:28.250 23:02:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:28.250 23:02:50 -- common/autotest_common.sh@643 -- # waitforlisten 2622663 00:06:28.250 23:02:50 -- common/autotest_common.sh@819 -- # '[' -z 2622663 ']' 00:06:28.250 23:02:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.250 23:02:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:28.250 23:02:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
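The default_locks case above reduces to one check: a target started with -m 0x1 must hold an spdk_cpu_lock file lock for its core while it runs, and the lock must disappear with the process (the follow-up waitforlisten on the dead pid is expected to fail). A minimal sketch of that probe, using the same lslocks/grep test seen in the trace (spdk_tgt path shortened; the stray "lslocks: write error" above is most likely lslocks hitting a closed pipe once grep -q has matched, not a test failure):

  spdk_tgt -m 0x1 &                          # single reactor pinned to core 0
  pid=$!
  lslocks -p "$pid" | grep -q spdk_cpu_lock  # core lock file held while the target runs
  kill "$pid"; wait "$pid"
  ! waitforlisten "$pid"                     # pid is gone, so this must return non-zero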
00:06:28.250 23:02:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:28.250 23:02:50 -- common/autotest_common.sh@10 -- # set +x 00:06:28.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (2622663) - No such process 00:06:28.250 ERROR: process (pid: 2622663) is no longer running 00:06:28.250 23:02:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:28.250 23:02:50 -- common/autotest_common.sh@852 -- # return 1 00:06:28.250 23:02:50 -- common/autotest_common.sh@643 -- # es=1 00:06:28.250 23:02:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:28.250 23:02:50 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:28.250 23:02:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:28.250 23:02:50 -- event/cpu_locks.sh@54 -- # no_locks 00:06:28.250 23:02:50 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:28.250 23:02:50 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:28.250 23:02:50 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:28.250 00:06:28.250 real 0m1.469s 00:06:28.250 user 0m1.540s 00:06:28.250 sys 0m0.486s 00:06:28.250 23:02:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.250 23:02:50 -- common/autotest_common.sh@10 -- # set +x 00:06:28.250 ************************************ 00:06:28.250 END TEST default_locks 00:06:28.250 ************************************ 00:06:28.250 23:02:50 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:28.250 23:02:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:28.250 23:02:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:28.250 23:02:50 -- common/autotest_common.sh@10 -- # set +x 00:06:28.250 ************************************ 00:06:28.250 START TEST default_locks_via_rpc 00:06:28.250 ************************************ 00:06:28.250 23:02:50 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:06:28.250 23:02:50 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2623027 00:06:28.250 23:02:50 -- event/cpu_locks.sh@63 -- # waitforlisten 2623027 00:06:28.250 23:02:50 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.250 23:02:50 -- common/autotest_common.sh@819 -- # '[' -z 2623027 ']' 00:06:28.250 23:02:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.250 23:02:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:28.250 23:02:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.250 23:02:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:28.250 23:02:50 -- common/autotest_common.sh@10 -- # set +x 00:06:28.250 [2024-06-07 23:02:50.864544] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:28.250 [2024-06-07 23:02:50.864601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2623027 ] 00:06:28.250 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.250 [2024-06-07 23:02:50.924967] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.511 [2024-06-07 23:02:50.953657] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:28.511 [2024-06-07 23:02:50.953786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.083 23:02:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:29.083 23:02:51 -- common/autotest_common.sh@852 -- # return 0 00:06:29.083 23:02:51 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:29.083 23:02:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:29.083 23:02:51 -- common/autotest_common.sh@10 -- # set +x 00:06:29.083 23:02:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:29.083 23:02:51 -- event/cpu_locks.sh@67 -- # no_locks 00:06:29.083 23:02:51 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:29.083 23:02:51 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:29.083 23:02:51 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:29.083 23:02:51 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:29.083 23:02:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:29.083 23:02:51 -- common/autotest_common.sh@10 -- # set +x 00:06:29.083 23:02:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:29.083 23:02:51 -- event/cpu_locks.sh@71 -- # locks_exist 2623027 00:06:29.083 23:02:51 -- event/cpu_locks.sh@22 -- # lslocks -p 2623027 00:06:29.083 23:02:51 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.655 23:02:52 -- event/cpu_locks.sh@73 -- # killprocess 2623027 00:06:29.655 23:02:52 -- common/autotest_common.sh@926 -- # '[' -z 2623027 ']' 00:06:29.655 23:02:52 -- common/autotest_common.sh@930 -- # kill -0 2623027 00:06:29.655 23:02:52 -- common/autotest_common.sh@931 -- # uname 00:06:29.655 23:02:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:29.655 23:02:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2623027 00:06:29.655 23:02:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:29.655 23:02:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:29.655 23:02:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2623027' 00:06:29.655 killing process with pid 2623027 00:06:29.655 23:02:52 -- common/autotest_common.sh@945 -- # kill 2623027 00:06:29.655 23:02:52 -- common/autotest_common.sh@950 -- # wait 2623027 00:06:29.916 00:06:29.916 real 0m1.610s 00:06:29.916 user 0m1.714s 00:06:29.916 sys 0m0.528s 00:06:29.916 23:02:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.916 23:02:52 -- common/autotest_common.sh@10 -- # set +x 00:06:29.916 ************************************ 00:06:29.916 END TEST default_locks_via_rpc 00:06:29.916 ************************************ 00:06:29.916 23:02:52 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:29.916 23:02:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:29.916 23:02:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:29.916 23:02:52 -- 
common/autotest_common.sh@10 -- # set +x 00:06:29.916 ************************************ 00:06:29.916 START TEST non_locking_app_on_locked_coremask 00:06:29.916 ************************************ 00:06:29.916 23:02:52 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:06:29.916 23:02:52 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2623399 00:06:29.916 23:02:52 -- event/cpu_locks.sh@81 -- # waitforlisten 2623399 /var/tmp/spdk.sock 00:06:29.916 23:02:52 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:29.916 23:02:52 -- common/autotest_common.sh@819 -- # '[' -z 2623399 ']' 00:06:29.916 23:02:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.916 23:02:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:29.916 23:02:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.916 23:02:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:29.916 23:02:52 -- common/autotest_common.sh@10 -- # set +x 00:06:29.916 [2024-06-07 23:02:52.516085] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:29.916 [2024-06-07 23:02:52.516147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2623399 ] 00:06:29.916 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.916 [2024-06-07 23:02:52.577136] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.177 [2024-06-07 23:02:52.610064] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:30.177 [2024-06-07 23:02:52.610202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.747 23:02:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:30.747 23:02:53 -- common/autotest_common.sh@852 -- # return 0 00:06:30.747 23:02:53 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:30.747 23:02:53 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2623632 00:06:30.747 23:02:53 -- event/cpu_locks.sh@85 -- # waitforlisten 2623632 /var/tmp/spdk2.sock 00:06:30.747 23:02:53 -- common/autotest_common.sh@819 -- # '[' -z 2623632 ']' 00:06:30.747 23:02:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.747 23:02:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:30.747 23:02:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:30.747 23:02:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:30.747 23:02:53 -- common/autotest_common.sh@10 -- # set +x 00:06:30.747 [2024-06-07 23:02:53.299713] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:30.747 [2024-06-07 23:02:53.299760] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2623632 ] 00:06:30.747 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.747 [2024-06-07 23:02:53.391020] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:30.747 [2024-06-07 23:02:53.391049] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.008 [2024-06-07 23:02:53.448257] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:31.008 [2024-06-07 23:02:53.448386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.580 23:02:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:31.580 23:02:54 -- common/autotest_common.sh@852 -- # return 0 00:06:31.580 23:02:54 -- event/cpu_locks.sh@87 -- # locks_exist 2623399 00:06:31.580 23:02:54 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:31.580 23:02:54 -- event/cpu_locks.sh@22 -- # lslocks -p 2623399 00:06:32.152 lslocks: write error 00:06:32.152 23:02:54 -- event/cpu_locks.sh@89 -- # killprocess 2623399 00:06:32.152 23:02:54 -- common/autotest_common.sh@926 -- # '[' -z 2623399 ']' 00:06:32.152 23:02:54 -- common/autotest_common.sh@930 -- # kill -0 2623399 00:06:32.152 23:02:54 -- common/autotest_common.sh@931 -- # uname 00:06:32.152 23:02:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:32.152 23:02:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2623399 00:06:32.152 23:02:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:32.152 23:02:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:32.152 23:02:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2623399' 00:06:32.152 killing process with pid 2623399 00:06:32.152 23:02:54 -- common/autotest_common.sh@945 -- # kill 2623399 00:06:32.152 23:02:54 -- common/autotest_common.sh@950 -- # wait 2623399 00:06:32.414 23:02:54 -- event/cpu_locks.sh@90 -- # killprocess 2623632 00:06:32.414 23:02:54 -- common/autotest_common.sh@926 -- # '[' -z 2623632 ']' 00:06:32.414 23:02:54 -- common/autotest_common.sh@930 -- # kill -0 2623632 00:06:32.414 23:02:54 -- common/autotest_common.sh@931 -- # uname 00:06:32.414 23:02:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:32.414 23:02:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2623632 00:06:32.414 23:02:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:32.414 23:02:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:32.414 23:02:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2623632' 00:06:32.414 killing process with pid 2623632 00:06:32.414 23:02:55 -- common/autotest_common.sh@945 -- # kill 2623632 00:06:32.414 23:02:55 -- common/autotest_common.sh@950 -- # wait 2623632 00:06:32.676 00:06:32.676 real 0m2.767s 00:06:32.676 user 0m2.993s 00:06:32.676 sys 0m0.838s 00:06:32.676 23:02:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.676 23:02:55 -- common/autotest_common.sh@10 -- # set +x 00:06:32.676 ************************************ 00:06:32.676 END TEST non_locking_app_on_locked_coremask 00:06:32.676 ************************************ 00:06:32.676 23:02:55 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask 
locking_app_on_unlocked_coremask 00:06:32.676 23:02:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:32.676 23:02:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:32.676 23:02:55 -- common/autotest_common.sh@10 -- # set +x 00:06:32.676 ************************************ 00:06:32.676 START TEST locking_app_on_unlocked_coremask 00:06:32.676 ************************************ 00:06:32.676 23:02:55 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:06:32.676 23:02:55 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2624110 00:06:32.676 23:02:55 -- event/cpu_locks.sh@99 -- # waitforlisten 2624110 /var/tmp/spdk.sock 00:06:32.676 23:02:55 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:32.676 23:02:55 -- common/autotest_common.sh@819 -- # '[' -z 2624110 ']' 00:06:32.676 23:02:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.676 23:02:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:32.676 23:02:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.676 23:02:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:32.676 23:02:55 -- common/autotest_common.sh@10 -- # set +x 00:06:32.676 [2024-06-07 23:02:55.328550] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:32.676 [2024-06-07 23:02:55.328606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2624110 ] 00:06:32.676 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.936 [2024-06-07 23:02:55.388360] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:32.936 [2024-06-07 23:02:55.388393] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.936 [2024-06-07 23:02:55.416145] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:32.936 [2024-06-07 23:02:55.416289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.508 23:02:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:33.508 23:02:56 -- common/autotest_common.sh@852 -- # return 0 00:06:33.508 23:02:56 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2624120 00:06:33.508 23:02:56 -- event/cpu_locks.sh@103 -- # waitforlisten 2624120 /var/tmp/spdk2.sock 00:06:33.508 23:02:56 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:33.508 23:02:56 -- common/autotest_common.sh@819 -- # '[' -z 2624120 ']' 00:06:33.508 23:02:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.508 23:02:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:33.508 23:02:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:33.508 23:02:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:33.508 23:02:56 -- common/autotest_common.sh@10 -- # set +x 00:06:33.508 [2024-06-07 23:02:56.153473] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:33.508 [2024-06-07 23:02:56.153538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2624120 ] 00:06:33.508 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.769 [2024-06-07 23:02:56.245842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.769 [2024-06-07 23:02:56.307023] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:33.769 [2024-06-07 23:02:56.307165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.395 23:02:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:34.395 23:02:56 -- common/autotest_common.sh@852 -- # return 0 00:06:34.395 23:02:56 -- event/cpu_locks.sh@105 -- # locks_exist 2624120 00:06:34.395 23:02:56 -- event/cpu_locks.sh@22 -- # lslocks -p 2624120 00:06:34.395 23:02:56 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:35.036 lslocks: write error 00:06:35.036 23:02:57 -- event/cpu_locks.sh@107 -- # killprocess 2624110 00:06:35.036 23:02:57 -- common/autotest_common.sh@926 -- # '[' -z 2624110 ']' 00:06:35.036 23:02:57 -- common/autotest_common.sh@930 -- # kill -0 2624110 00:06:35.036 23:02:57 -- common/autotest_common.sh@931 -- # uname 00:06:35.036 23:02:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:35.036 23:02:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2624110 00:06:35.036 23:02:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:35.036 23:02:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:35.036 23:02:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2624110' 00:06:35.036 killing process with pid 2624110 00:06:35.036 23:02:57 -- common/autotest_common.sh@945 -- # kill 2624110 00:06:35.036 23:02:57 -- common/autotest_common.sh@950 -- # wait 2624110 00:06:35.297 23:02:57 -- event/cpu_locks.sh@108 -- # killprocess 2624120 00:06:35.297 23:02:57 -- common/autotest_common.sh@926 -- # '[' -z 2624120 ']' 00:06:35.297 23:02:57 -- common/autotest_common.sh@930 -- # kill -0 2624120 00:06:35.297 23:02:57 -- common/autotest_common.sh@931 -- # uname 00:06:35.297 23:02:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:35.297 23:02:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2624120 00:06:35.297 23:02:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:35.297 23:02:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:35.297 23:02:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2624120' 00:06:35.297 killing process with pid 2624120 00:06:35.297 23:02:57 -- common/autotest_common.sh@945 -- # kill 2624120 00:06:35.297 23:02:57 -- common/autotest_common.sh@950 -- # wait 2624120 00:06:35.558 00:06:35.558 real 0m2.830s 00:06:35.558 user 0m3.084s 00:06:35.558 sys 0m0.859s 00:06:35.558 23:02:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.558 23:02:58 -- common/autotest_common.sh@10 -- # set +x 00:06:35.558 ************************************ 00:06:35.558 END TEST locking_app_on_unlocked_coremask 
00:06:35.558 ************************************ 00:06:35.558 23:02:58 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:35.558 23:02:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:35.558 23:02:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:35.558 23:02:58 -- common/autotest_common.sh@10 -- # set +x 00:06:35.558 ************************************ 00:06:35.558 START TEST locking_app_on_locked_coremask 00:06:35.558 ************************************ 00:06:35.558 23:02:58 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:06:35.558 23:02:58 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2624647 00:06:35.558 23:02:58 -- event/cpu_locks.sh@116 -- # waitforlisten 2624647 /var/tmp/spdk.sock 00:06:35.558 23:02:58 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.558 23:02:58 -- common/autotest_common.sh@819 -- # '[' -z 2624647 ']' 00:06:35.558 23:02:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.558 23:02:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:35.558 23:02:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.558 23:02:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:35.558 23:02:58 -- common/autotest_common.sh@10 -- # set +x 00:06:35.558 [2024-06-07 23:02:58.205163] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:35.558 [2024-06-07 23:02:58.205229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2624647 ] 00:06:35.558 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.818 [2024-06-07 23:02:58.267710] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.818 [2024-06-07 23:02:58.301685] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:35.818 [2024-06-07 23:02:58.301819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.389 23:02:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:36.389 23:02:58 -- common/autotest_common.sh@852 -- # return 0 00:06:36.389 23:02:58 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:36.389 23:02:58 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2624837 00:06:36.389 23:02:58 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2624837 /var/tmp/spdk2.sock 00:06:36.389 23:02:58 -- common/autotest_common.sh@640 -- # local es=0 00:06:36.389 23:02:58 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 2624837 /var/tmp/spdk2.sock 00:06:36.389 23:02:58 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:36.389 23:02:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:36.389 23:02:58 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:36.389 23:02:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:36.389 23:02:58 -- common/autotest_common.sh@643 -- # waitforlisten 2624837 /var/tmp/spdk2.sock 00:06:36.389 23:02:58 -- common/autotest_common.sh@819 -- 
# '[' -z 2624837 ']' 00:06:36.389 23:02:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.389 23:02:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:36.389 23:02:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:36.389 23:02:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:36.389 23:02:58 -- common/autotest_common.sh@10 -- # set +x 00:06:36.389 [2024-06-07 23:02:58.984834] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:36.389 [2024-06-07 23:02:58.984883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2624837 ] 00:06:36.389 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.649 [2024-06-07 23:02:59.079178] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2624647 has claimed it. 00:06:36.649 [2024-06-07 23:02:59.079220] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:37.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (2624837) - No such process 00:06:37.219 ERROR: process (pid: 2624837) is no longer running 00:06:37.219 23:02:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:37.219 23:02:59 -- common/autotest_common.sh@852 -- # return 1 00:06:37.219 23:02:59 -- common/autotest_common.sh@643 -- # es=1 00:06:37.219 23:02:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:37.219 23:02:59 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:37.219 23:02:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:37.219 23:02:59 -- event/cpu_locks.sh@122 -- # locks_exist 2624647 00:06:37.219 23:02:59 -- event/cpu_locks.sh@22 -- # lslocks -p 2624647 00:06:37.219 23:02:59 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.481 lslocks: write error 00:06:37.481 23:03:00 -- event/cpu_locks.sh@124 -- # killprocess 2624647 00:06:37.481 23:03:00 -- common/autotest_common.sh@926 -- # '[' -z 2624647 ']' 00:06:37.482 23:03:00 -- common/autotest_common.sh@930 -- # kill -0 2624647 00:06:37.482 23:03:00 -- common/autotest_common.sh@931 -- # uname 00:06:37.482 23:03:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:37.482 23:03:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2624647 00:06:37.482 23:03:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:37.482 23:03:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:37.482 23:03:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2624647' 00:06:37.482 killing process with pid 2624647 00:06:37.482 23:03:00 -- common/autotest_common.sh@945 -- # kill 2624647 00:06:37.482 23:03:00 -- common/autotest_common.sh@950 -- # wait 2624647 00:06:37.743 00:06:37.743 real 0m2.132s 00:06:37.743 user 0m2.335s 00:06:37.743 sys 0m0.560s 00:06:37.743 23:03:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.743 23:03:00 -- common/autotest_common.sh@10 -- # set +x 00:06:37.743 ************************************ 00:06:37.743 END TEST locking_app_on_locked_coremask 00:06:37.743 ************************************ 00:06:37.743 
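Two details in the locking tests above are easy to misread. First, the "lslocks: write error" lines are expected noise, not failures: locks_exist pipes lslocks -p <pid> into grep -q, and grep -q closes the pipe as soon as it matches an spdk_cpu_lock entry, so lslocks reports a write error on the broken pipe. Second, the "Cannot create lock on core 0, probably process 2624647 has claimed it" message is the behaviour under test: the second spdk_tgt is started on the same core mask without --disable-cpumask-locks and is expected to refuse to start. A rough manual re-creation, assuming a built SPDK tree and illustrative paths:

./build/bin/spdk_tgt -m 0x1 &                            # first target claims core 0 (/var/tmp/spdk_cpu_lock_000)
pid=$!; sleep 2                                          # the test proper uses waitforlisten rather than a sleep
lslocks -p "$pid" | grep -q spdk_cpu_lock && echo held   # the same check locks_exist performs
./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock       # second target exits: core 0 already claimed

The lock files themselves are the /var/tmp/spdk_cpu_lock_* entries that the later check_remaining_locks step globs and compares against the expected set.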
23:03:00 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:37.743 23:03:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:37.743 23:03:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.743 23:03:00 -- common/autotest_common.sh@10 -- # set +x 00:06:37.743 ************************************ 00:06:37.743 START TEST locking_overlapped_coremask 00:06:37.743 ************************************ 00:06:37.743 23:03:00 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:06:37.743 23:03:00 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2625225 00:06:37.743 23:03:00 -- event/cpu_locks.sh@133 -- # waitforlisten 2625225 /var/tmp/spdk.sock 00:06:37.743 23:03:00 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:37.744 23:03:00 -- common/autotest_common.sh@819 -- # '[' -z 2625225 ']' 00:06:37.744 23:03:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.744 23:03:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:37.744 23:03:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.744 23:03:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:37.744 23:03:00 -- common/autotest_common.sh@10 -- # set +x 00:06:37.744 [2024-06-07 23:03:00.381328] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:37.744 [2024-06-07 23:03:00.381388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2625225 ] 00:06:37.744 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.005 [2024-06-07 23:03:00.442896] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.005 [2024-06-07 23:03:00.475721] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:38.005 [2024-06-07 23:03:00.475981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.005 [2024-06-07 23:03:00.476092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.005 [2024-06-07 23:03:00.476094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.577 23:03:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:38.577 23:03:01 -- common/autotest_common.sh@852 -- # return 0 00:06:38.577 23:03:01 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:38.577 23:03:01 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2625273 00:06:38.577 23:03:01 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2625273 /var/tmp/spdk2.sock 00:06:38.577 23:03:01 -- common/autotest_common.sh@640 -- # local es=0 00:06:38.577 23:03:01 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 2625273 /var/tmp/spdk2.sock 00:06:38.577 23:03:01 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:38.577 23:03:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:38.577 23:03:01 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:38.577 23:03:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:38.577 23:03:01 
-- common/autotest_common.sh@643 -- # waitforlisten 2625273 /var/tmp/spdk2.sock 00:06:38.577 23:03:01 -- common/autotest_common.sh@819 -- # '[' -z 2625273 ']' 00:06:38.577 23:03:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.577 23:03:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:38.577 23:03:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:38.577 23:03:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:38.577 23:03:01 -- common/autotest_common.sh@10 -- # set +x 00:06:38.577 [2024-06-07 23:03:01.186627] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:38.577 [2024-06-07 23:03:01.186680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2625273 ] 00:06:38.577 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.838 [2024-06-07 23:03:01.261220] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2625225 has claimed it. 00:06:38.838 [2024-06-07 23:03:01.261256] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:39.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (2625273) - No such process 00:06:39.409 ERROR: process (pid: 2625273) is no longer running 00:06:39.409 23:03:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:39.409 23:03:01 -- common/autotest_common.sh@852 -- # return 1 00:06:39.409 23:03:01 -- common/autotest_common.sh@643 -- # es=1 00:06:39.409 23:03:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:39.409 23:03:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:39.409 23:03:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:39.409 23:03:01 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:39.409 23:03:01 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:39.409 23:03:01 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:39.409 23:03:01 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:39.409 23:03:01 -- event/cpu_locks.sh@141 -- # killprocess 2625225 00:06:39.409 23:03:01 -- common/autotest_common.sh@926 -- # '[' -z 2625225 ']' 00:06:39.409 23:03:01 -- common/autotest_common.sh@930 -- # kill -0 2625225 00:06:39.409 23:03:01 -- common/autotest_common.sh@931 -- # uname 00:06:39.409 23:03:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:39.409 23:03:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2625225 00:06:39.409 23:03:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:39.409 23:03:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:39.409 23:03:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2625225' 00:06:39.409 killing process with pid 2625225 00:06:39.409 23:03:01 -- common/autotest_common.sh@945 -- # kill 2625225 00:06:39.409 23:03:01 
-- common/autotest_common.sh@950 -- # wait 2625225 00:06:39.409 00:06:39.409 real 0m1.723s 00:06:39.409 user 0m4.978s 00:06:39.409 sys 0m0.342s 00:06:39.409 23:03:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.409 23:03:02 -- common/autotest_common.sh@10 -- # set +x 00:06:39.409 ************************************ 00:06:39.409 END TEST locking_overlapped_coremask 00:06:39.409 ************************************ 00:06:39.409 23:03:02 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:39.409 23:03:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:39.409 23:03:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:39.409 23:03:02 -- common/autotest_common.sh@10 -- # set +x 00:06:39.670 ************************************ 00:06:39.670 START TEST locking_overlapped_coremask_via_rpc 00:06:39.670 ************************************ 00:06:39.670 23:03:02 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:06:39.670 23:03:02 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2625683 00:06:39.670 23:03:02 -- event/cpu_locks.sh@149 -- # waitforlisten 2625683 /var/tmp/spdk.sock 00:06:39.670 23:03:02 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:39.670 23:03:02 -- common/autotest_common.sh@819 -- # '[' -z 2625683 ']' 00:06:39.670 23:03:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.670 23:03:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:39.670 23:03:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.670 23:03:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:39.670 23:03:02 -- common/autotest_common.sh@10 -- # set +x 00:06:39.670 [2024-06-07 23:03:02.145943] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:39.670 [2024-06-07 23:03:02.146001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2625683 ] 00:06:39.670 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.670 [2024-06-07 23:03:02.206220] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:39.670 [2024-06-07 23:03:02.206259] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:39.670 [2024-06-07 23:03:02.235718] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:39.670 [2024-06-07 23:03:02.235987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.670 [2024-06-07 23:03:02.236102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.670 [2024-06-07 23:03:02.236105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.241 23:03:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:40.241 23:03:02 -- common/autotest_common.sh@852 -- # return 0 00:06:40.241 23:03:02 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:40.241 23:03:02 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2625714 00:06:40.241 23:03:02 -- event/cpu_locks.sh@153 -- # waitforlisten 2625714 /var/tmp/spdk2.sock 00:06:40.241 23:03:02 -- common/autotest_common.sh@819 -- # '[' -z 2625714 ']' 00:06:40.241 23:03:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.241 23:03:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:40.241 23:03:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.241 23:03:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:40.241 23:03:02 -- common/autotest_common.sh@10 -- # set +x 00:06:40.505 [2024-06-07 23:03:02.954219] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:40.505 [2024-06-07 23:03:02.954284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2625714 ] 00:06:40.505 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.505 [2024-06-07 23:03:03.028424] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:40.505 [2024-06-07 23:03:03.028448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:40.505 [2024-06-07 23:03:03.084099] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:40.505 [2024-06-07 23:03:03.084315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:40.505 [2024-06-07 23:03:03.084367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.505 [2024-06-07 23:03:03.084370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:41.075 23:03:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:41.075 23:03:03 -- common/autotest_common.sh@852 -- # return 0 00:06:41.075 23:03:03 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:41.075 23:03:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:41.075 23:03:03 -- common/autotest_common.sh@10 -- # set +x 00:06:41.075 23:03:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:41.075 23:03:03 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:41.075 23:03:03 -- common/autotest_common.sh@640 -- # local es=0 00:06:41.075 23:03:03 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:41.075 23:03:03 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:06:41.075 23:03:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:41.075 23:03:03 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:06:41.075 23:03:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:41.075 23:03:03 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:41.075 23:03:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:41.075 23:03:03 -- common/autotest_common.sh@10 -- # set +x 00:06:41.075 [2024-06-07 23:03:03.733297] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2625683 has claimed it. 00:06:41.075 request: 00:06:41.075 { 00:06:41.075 "method": "framework_enable_cpumask_locks", 00:06:41.075 "req_id": 1 00:06:41.075 } 00:06:41.075 Got JSON-RPC error response 00:06:41.075 response: 00:06:41.075 { 00:06:41.075 "code": -32603, 00:06:41.075 "message": "Failed to claim CPU core: 2" 00:06:41.075 } 00:06:41.075 23:03:03 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:06:41.075 23:03:03 -- common/autotest_common.sh@643 -- # es=1 00:06:41.075 23:03:03 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:41.075 23:03:03 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:41.075 23:03:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:41.075 23:03:03 -- event/cpu_locks.sh@158 -- # waitforlisten 2625683 /var/tmp/spdk.sock 00:06:41.075 23:03:03 -- common/autotest_common.sh@819 -- # '[' -z 2625683 ']' 00:06:41.075 23:03:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.075 23:03:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:41.075 23:03:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
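The via-RPC variant differs from the previous overlapped-coremask test only in when the locks are taken. Both targets start with --disable-cpumask-locks, so -m 0x7 (cores 0-2) and -m 0x1c (cores 2-4) can come up despite overlapping on core 2. The first target then claims its cores with the framework_enable_cpumask_locks RPC; the same call against the second target fails with the JSON-RPC error shown above because core 2 is already locked. Roughly equivalent manual calls, assuming the standard scripts/rpc.py client and the socket paths used in the log:

scripts/rpc.py framework_enable_cpumask_locks                          # first target (/var/tmp/spdk.sock): succeeds
scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target: error -32603, "Failed to claim CPU core: 2"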
00:06:41.075 23:03:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:41.075 23:03:03 -- common/autotest_common.sh@10 -- # set +x 00:06:41.335 23:03:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:41.335 23:03:03 -- common/autotest_common.sh@852 -- # return 0 00:06:41.335 23:03:03 -- event/cpu_locks.sh@159 -- # waitforlisten 2625714 /var/tmp/spdk2.sock 00:06:41.335 23:03:03 -- common/autotest_common.sh@819 -- # '[' -z 2625714 ']' 00:06:41.335 23:03:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.335 23:03:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:41.335 23:03:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.335 23:03:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:41.335 23:03:03 -- common/autotest_common.sh@10 -- # set +x 00:06:41.595 23:03:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:41.595 23:03:04 -- common/autotest_common.sh@852 -- # return 0 00:06:41.595 23:03:04 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:41.595 23:03:04 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:41.595 23:03:04 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:41.595 23:03:04 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:41.595 00:06:41.595 real 0m1.971s 00:06:41.595 user 0m0.749s 00:06:41.595 sys 0m0.144s 00:06:41.595 23:03:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.595 23:03:04 -- common/autotest_common.sh@10 -- # set +x 00:06:41.595 ************************************ 00:06:41.595 END TEST locking_overlapped_coremask_via_rpc 00:06:41.595 ************************************ 00:06:41.595 23:03:04 -- event/cpu_locks.sh@174 -- # cleanup 00:06:41.595 23:03:04 -- event/cpu_locks.sh@15 -- # [[ -z 2625683 ]] 00:06:41.595 23:03:04 -- event/cpu_locks.sh@15 -- # killprocess 2625683 00:06:41.595 23:03:04 -- common/autotest_common.sh@926 -- # '[' -z 2625683 ']' 00:06:41.595 23:03:04 -- common/autotest_common.sh@930 -- # kill -0 2625683 00:06:41.595 23:03:04 -- common/autotest_common.sh@931 -- # uname 00:06:41.595 23:03:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:41.595 23:03:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2625683 00:06:41.595 23:03:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:41.595 23:03:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:41.595 23:03:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2625683' 00:06:41.595 killing process with pid 2625683 00:06:41.595 23:03:04 -- common/autotest_common.sh@945 -- # kill 2625683 00:06:41.595 23:03:04 -- common/autotest_common.sh@950 -- # wait 2625683 00:06:41.855 23:03:04 -- event/cpu_locks.sh@16 -- # [[ -z 2625714 ]] 00:06:41.855 23:03:04 -- event/cpu_locks.sh@16 -- # killprocess 2625714 00:06:41.855 23:03:04 -- common/autotest_common.sh@926 -- # '[' -z 2625714 ']' 00:06:41.855 23:03:04 -- common/autotest_common.sh@930 -- # kill -0 2625714 00:06:41.855 23:03:04 -- common/autotest_common.sh@931 -- # uname 
00:06:41.855 23:03:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:41.855 23:03:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2625714 00:06:41.855 23:03:04 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:41.855 23:03:04 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:41.855 23:03:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2625714' 00:06:41.855 killing process with pid 2625714 00:06:41.855 23:03:04 -- common/autotest_common.sh@945 -- # kill 2625714 00:06:41.855 23:03:04 -- common/autotest_common.sh@950 -- # wait 2625714 00:06:42.116 23:03:04 -- event/cpu_locks.sh@18 -- # rm -f 00:06:42.116 23:03:04 -- event/cpu_locks.sh@1 -- # cleanup 00:06:42.116 23:03:04 -- event/cpu_locks.sh@15 -- # [[ -z 2625683 ]] 00:06:42.116 23:03:04 -- event/cpu_locks.sh@15 -- # killprocess 2625683 00:06:42.116 23:03:04 -- common/autotest_common.sh@926 -- # '[' -z 2625683 ']' 00:06:42.116 23:03:04 -- common/autotest_common.sh@930 -- # kill -0 2625683 00:06:42.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2625683) - No such process 00:06:42.116 23:03:04 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2625683 is not found' 00:06:42.116 Process with pid 2625683 is not found 00:06:42.116 23:03:04 -- event/cpu_locks.sh@16 -- # [[ -z 2625714 ]] 00:06:42.116 23:03:04 -- event/cpu_locks.sh@16 -- # killprocess 2625714 00:06:42.116 23:03:04 -- common/autotest_common.sh@926 -- # '[' -z 2625714 ']' 00:06:42.116 23:03:04 -- common/autotest_common.sh@930 -- # kill -0 2625714 00:06:42.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2625714) - No such process 00:06:42.116 23:03:04 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2625714 is not found' 00:06:42.116 Process with pid 2625714 is not found 00:06:42.117 23:03:04 -- event/cpu_locks.sh@18 -- # rm -f 00:06:42.117 00:06:42.117 real 0m15.409s 00:06:42.117 user 0m26.885s 00:06:42.117 sys 0m4.505s 00:06:42.117 23:03:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.117 23:03:04 -- common/autotest_common.sh@10 -- # set +x 00:06:42.117 ************************************ 00:06:42.117 END TEST cpu_locks 00:06:42.117 ************************************ 00:06:42.117 00:06:42.117 real 0m39.284s 00:06:42.117 user 1m15.642s 00:06:42.117 sys 0m7.444s 00:06:42.117 23:03:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.117 23:03:04 -- common/autotest_common.sh@10 -- # set +x 00:06:42.117 ************************************ 00:06:42.117 END TEST event 00:06:42.117 ************************************ 00:06:42.117 23:03:04 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:42.117 23:03:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:42.117 23:03:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.117 23:03:04 -- common/autotest_common.sh@10 -- # set +x 00:06:42.117 ************************************ 00:06:42.117 START TEST thread 00:06:42.117 ************************************ 00:06:42.117 23:03:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:42.117 * Looking for test storage... 
00:06:42.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:42.117 23:03:04 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:42.117 23:03:04 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:42.117 23:03:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.117 23:03:04 -- common/autotest_common.sh@10 -- # set +x 00:06:42.117 ************************************ 00:06:42.117 START TEST thread_poller_perf 00:06:42.117 ************************************ 00:06:42.117 23:03:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:42.378 [2024-06-07 23:03:04.807506] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:42.378 [2024-06-07 23:03:04.807617] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2626221 ] 00:06:42.378 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.378 [2024-06-07 23:03:04.874772] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.378 [2024-06-07 23:03:04.911994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.378 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:43.319 ====================================== 00:06:43.319 busy:2411752120 (cyc) 00:06:43.319 total_run_count: 274000 00:06:43.319 tsc_hz: 2400000000 (cyc) 00:06:43.319 ====================================== 00:06:43.319 poller_cost: 8802 (cyc), 3667 (nsec) 00:06:43.319 00:06:43.319 real 0m1.172s 00:06:43.319 user 0m1.088s 00:06:43.319 sys 0m0.079s 00:06:43.319 23:03:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.319 23:03:05 -- common/autotest_common.sh@10 -- # set +x 00:06:43.319 ************************************ 00:06:43.319 END TEST thread_poller_perf 00:06:43.319 ************************************ 00:06:43.319 23:03:05 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:43.319 23:03:05 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:43.319 23:03:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:43.319 23:03:05 -- common/autotest_common.sh@10 -- # set +x 00:06:43.579 ************************************ 00:06:43.579 START TEST thread_poller_perf 00:06:43.579 ************************************ 00:06:43.579 23:03:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:43.579 [2024-06-07 23:03:06.022970] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
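The summary block above is easier to interpret with the derivation spelled out: poller_cost is the busy cycle count divided by total_run_count, and the nanosecond figure is that quotient divided by the TSC frequency in GHz. A quick sanity check of the 1-microsecond-period run just reported, assuming the 2400000000 cyc tsc_hz it prints:

echo $(( 2411752120 / 274000 ))      # 8802 cycles per poller invocation
awk 'BEGIN { print 8802 / 2.4 }'     # 3667.5 -> reported as 3667 nsec

The second run below repeats the measurement with a 0 microsecond poller period (plain rather than timed pollers), which is presumably why its per-invocation cost comes out far lower.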
00:06:43.579 [2024-06-07 23:03:06.023069] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2626490 ] 00:06:43.579 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.579 [2024-06-07 23:03:06.087219] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.579 [2024-06-07 23:03:06.117307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.579 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:44.521 ====================================== 00:06:44.521 busy:2402705238 (cyc) 00:06:44.521 total_run_count: 3795000 00:06:44.521 tsc_hz: 2400000000 (cyc) 00:06:44.521 ====================================== 00:06:44.521 poller_cost: 633 (cyc), 263 (nsec) 00:06:44.521 00:06:44.521 real 0m1.156s 00:06:44.521 user 0m1.088s 00:06:44.521 sys 0m0.063s 00:06:44.521 23:03:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.521 23:03:07 -- common/autotest_common.sh@10 -- # set +x 00:06:44.521 ************************************ 00:06:44.521 END TEST thread_poller_perf 00:06:44.521 ************************************ 00:06:44.521 23:03:07 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:44.521 00:06:44.521 real 0m2.512s 00:06:44.521 user 0m2.249s 00:06:44.521 sys 0m0.274s 00:06:44.521 23:03:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.521 23:03:07 -- common/autotest_common.sh@10 -- # set +x 00:06:44.521 ************************************ 00:06:44.521 END TEST thread 00:06:44.521 ************************************ 00:06:44.785 23:03:07 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:44.785 23:03:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:44.785 23:03:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.785 23:03:07 -- common/autotest_common.sh@10 -- # set +x 00:06:44.785 ************************************ 00:06:44.785 START TEST accel 00:06:44.785 ************************************ 00:06:44.785 23:03:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:44.785 * Looking for test storage... 00:06:44.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:44.785 23:03:07 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:44.785 23:03:07 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:44.785 23:03:07 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:44.785 23:03:07 -- accel/accel.sh@59 -- # spdk_tgt_pid=2626880 00:06:44.785 23:03:07 -- accel/accel.sh@60 -- # waitforlisten 2626880 00:06:44.785 23:03:07 -- common/autotest_common.sh@819 -- # '[' -z 2626880 ']' 00:06:44.785 23:03:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.785 23:03:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:44.785 23:03:07 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:44.785 23:03:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:44.785 23:03:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:44.785 23:03:07 -- accel/accel.sh@58 -- # build_accel_config 00:06:44.785 23:03:07 -- common/autotest_common.sh@10 -- # set +x 00:06:44.785 23:03:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.785 23:03:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.785 23:03:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.785 23:03:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.785 23:03:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.785 23:03:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.785 23:03:07 -- accel/accel.sh@42 -- # jq -r . 00:06:44.785 [2024-06-07 23:03:07.388810] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:44.785 [2024-06-07 23:03:07.388887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2626880 ] 00:06:44.785 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.785 [2024-06-07 23:03:07.455919] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.045 [2024-06-07 23:03:07.492556] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:45.045 [2024-06-07 23:03:07.492732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.617 23:03:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:45.617 23:03:08 -- common/autotest_common.sh@852 -- # return 0 00:06:45.617 23:03:08 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:45.617 23:03:08 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:45.617 23:03:08 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:45.617 23:03:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:45.617 23:03:08 -- common/autotest_common.sh@10 -- # set +x 00:06:45.617 23:03:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:45.617 23:03:08 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.617 23:03:08 -- accel/accel.sh@64 -- # IFS== 00:06:45.617 23:03:08 -- accel/accel.sh@64 -- # read -r opc module 00:06:45.617 23:03:08 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:45.617 23:03:08 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.617 23:03:08 -- accel/accel.sh@64 -- # IFS== 00:06:45.617 23:03:08 -- accel/accel.sh@64 -- # read -r opc module 00:06:45.617 23:03:08 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:45.617 23:03:08 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.617 23:03:08 -- accel/accel.sh@64 -- # IFS== 00:06:45.617 23:03:08 -- accel/accel.sh@64 -- # read -r opc module 00:06:45.617 23:03:08 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:45.617 23:03:08 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.617 23:03:08 -- accel/accel.sh@64 -- # IFS== 00:06:45.617 23:03:08 -- accel/accel.sh@64 -- # read -r opc module 00:06:45.617 23:03:08 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:45.617 23:03:08 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.617 23:03:08 -- accel/accel.sh@64 -- # IFS== 00:06:45.617 23:03:08 -- accel/accel.sh@64 -- # read -r opc module 00:06:45.617 23:03:08 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:45.617 23:03:08 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.617 23:03:08 -- accel/accel.sh@64 -- # IFS== 00:06:45.617 23:03:08 -- accel/accel.sh@64 -- # read -r opc module 00:06:45.617 23:03:08 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:45.617 23:03:08 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.617 23:03:08 -- accel/accel.sh@64 -- # IFS== 00:06:45.617 23:03:08 -- accel/accel.sh@64 -- # read -r opc module 00:06:45.617 23:03:08 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:45.617 23:03:08 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.617 23:03:08 -- accel/accel.sh@64 -- # IFS== 00:06:45.617 23:03:08 -- accel/accel.sh@64 -- # read -r opc module 00:06:45.617 23:03:08 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:45.617 23:03:08 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.617 23:03:08 -- accel/accel.sh@64 -- # IFS== 00:06:45.617 23:03:08 -- accel/accel.sh@64 -- # read -r opc module 00:06:45.617 23:03:08 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:45.617 23:03:08 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.617 23:03:08 -- accel/accel.sh@64 -- # IFS== 00:06:45.617 23:03:08 -- accel/accel.sh@64 -- # read -r opc module 00:06:45.617 23:03:08 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:45.617 23:03:08 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.617 23:03:08 -- accel/accel.sh@64 -- # IFS== 00:06:45.617 23:03:08 -- accel/accel.sh@64 -- # read -r opc module 00:06:45.617 23:03:08 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:45.617 23:03:08 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.617 23:03:08 -- accel/accel.sh@64 -- # IFS== 00:06:45.617 23:03:08 -- accel/accel.sh@64 -- # read -r opc module 00:06:45.617 
23:03:08 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:45.617 23:03:08 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.617 23:03:08 -- accel/accel.sh@64 -- # IFS== 00:06:45.617 23:03:08 -- accel/accel.sh@64 -- # read -r opc module 00:06:45.617 23:03:08 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:45.617 23:03:08 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.617 23:03:08 -- accel/accel.sh@64 -- # IFS== 00:06:45.617 23:03:08 -- accel/accel.sh@64 -- # read -r opc module 00:06:45.617 23:03:08 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:45.617 23:03:08 -- accel/accel.sh@67 -- # killprocess 2626880 00:06:45.617 23:03:08 -- common/autotest_common.sh@926 -- # '[' -z 2626880 ']' 00:06:45.617 23:03:08 -- common/autotest_common.sh@930 -- # kill -0 2626880 00:06:45.617 23:03:08 -- common/autotest_common.sh@931 -- # uname 00:06:45.617 23:03:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:45.617 23:03:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2626880 00:06:45.617 23:03:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:45.617 23:03:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:45.617 23:03:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2626880' 00:06:45.617 killing process with pid 2626880 00:06:45.617 23:03:08 -- common/autotest_common.sh@945 -- # kill 2626880 00:06:45.617 23:03:08 -- common/autotest_common.sh@950 -- # wait 2626880 00:06:45.879 23:03:08 -- accel/accel.sh@68 -- # trap - ERR 00:06:45.879 23:03:08 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:45.879 23:03:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:45.879 23:03:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.879 23:03:08 -- common/autotest_common.sh@10 -- # set +x 00:06:45.879 23:03:08 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:06:45.879 23:03:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:45.879 23:03:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.879 23:03:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.879 23:03:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.879 23:03:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.879 23:03:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.879 23:03:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.879 23:03:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.879 23:03:08 -- accel/accel.sh@42 -- # jq -r . 
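The long IFS== / read loop above is accel.sh parsing the accel_get_opc_assignments RPC: every accel opcode is mapped to the module that will service it, and with no hardware accelerators configured each entry resolves to the software module. A roughly equivalent one-off query, assuming the standard scripts/rpc.py client:

scripts/rpc.py accel_get_opc_assignments | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
# expected output is one opcode=module pair per line, e.g. copy=software

The jq filter only reshapes the JSON object into the key=value lines that the script stores in expected_opcs.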
00:06:45.879 23:03:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.879 23:03:08 -- common/autotest_common.sh@10 -- # set +x 00:06:45.879 23:03:08 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:45.879 23:03:08 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:45.879 23:03:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.879 23:03:08 -- common/autotest_common.sh@10 -- # set +x 00:06:45.879 ************************************ 00:06:45.879 START TEST accel_missing_filename 00:06:45.879 ************************************ 00:06:45.879 23:03:08 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:45.879 23:03:08 -- common/autotest_common.sh@640 -- # local es=0 00:06:45.879 23:03:08 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:45.879 23:03:08 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:45.879 23:03:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:45.879 23:03:08 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:45.879 23:03:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:45.879 23:03:08 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:45.879 23:03:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:45.879 23:03:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.879 23:03:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.879 23:03:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.879 23:03:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.879 23:03:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.879 23:03:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.879 23:03:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.879 23:03:08 -- accel/accel.sh@42 -- # jq -r . 00:06:45.879 [2024-06-07 23:03:08.559369] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:45.879 [2024-06-07 23:03:08.559415] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2627271 ] 00:06:46.140 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.140 [2024-06-07 23:03:08.610068] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.140 [2024-06-07 23:03:08.638830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.140 [2024-06-07 23:03:08.670626] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:46.140 [2024-06-07 23:03:08.707852] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:46.140 A filename is required. 
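This failure is the point of the accel_missing_filename test: the compress workload in accel_perf reads its input data from a file, and with no -l argument the application aborts at startup with "A filename is required." A minimal sketch of an invocation that satisfies the check, assuming a built SPDK tree and any reasonably sized input file:

./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib

The accel_compress_verify test that follows exercises the complementary error path: adding -y asks for verification, which the compress workload does not support.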
00:06:46.140 23:03:08 -- common/autotest_common.sh@643 -- # es=234 00:06:46.140 23:03:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:46.140 23:03:08 -- common/autotest_common.sh@652 -- # es=106 00:06:46.140 23:03:08 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:46.140 23:03:08 -- common/autotest_common.sh@660 -- # es=1 00:06:46.140 23:03:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:46.140 00:06:46.140 real 0m0.203s 00:06:46.140 user 0m0.161s 00:06:46.140 sys 0m0.082s 00:06:46.140 23:03:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.140 23:03:08 -- common/autotest_common.sh@10 -- # set +x 00:06:46.140 ************************************ 00:06:46.140 END TEST accel_missing_filename 00:06:46.140 ************************************ 00:06:46.140 23:03:08 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:46.140 23:03:08 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:46.140 23:03:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.140 23:03:08 -- common/autotest_common.sh@10 -- # set +x 00:06:46.140 ************************************ 00:06:46.140 START TEST accel_compress_verify 00:06:46.140 ************************************ 00:06:46.140 23:03:08 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:46.140 23:03:08 -- common/autotest_common.sh@640 -- # local es=0 00:06:46.140 23:03:08 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:46.140 23:03:08 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:46.140 23:03:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:46.140 23:03:08 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:46.140 23:03:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:46.140 23:03:08 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:46.140 23:03:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:46.140 23:03:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.140 23:03:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.140 23:03:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.140 23:03:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.140 23:03:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.140 23:03:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.140 23:03:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.140 23:03:08 -- accel/accel.sh@42 -- # jq -r . 00:06:46.140 [2024-06-07 23:03:08.816182] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:46.140 [2024-06-07 23:03:08.816267] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2627411 ] 00:06:46.401 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.401 [2024-06-07 23:03:08.878194] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.401 [2024-06-07 23:03:08.908204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.401 [2024-06-07 23:03:08.940247] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:46.401 [2024-06-07 23:03:08.977475] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:46.401 00:06:46.401 Compression does not support the verify option, aborting. 00:06:46.401 23:03:09 -- common/autotest_common.sh@643 -- # es=161 00:06:46.401 23:03:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:46.401 23:03:09 -- common/autotest_common.sh@652 -- # es=33 00:06:46.401 23:03:09 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:46.401 23:03:09 -- common/autotest_common.sh@660 -- # es=1 00:06:46.401 23:03:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:46.401 00:06:46.401 real 0m0.230s 00:06:46.401 user 0m0.164s 00:06:46.401 sys 0m0.106s 00:06:46.401 23:03:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.401 23:03:09 -- common/autotest_common.sh@10 -- # set +x 00:06:46.401 ************************************ 00:06:46.401 END TEST accel_compress_verify 00:06:46.401 ************************************ 00:06:46.401 23:03:09 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:46.401 23:03:09 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:46.401 23:03:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.401 23:03:09 -- common/autotest_common.sh@10 -- # set +x 00:06:46.401 ************************************ 00:06:46.401 START TEST accel_wrong_workload 00:06:46.401 ************************************ 00:06:46.401 23:03:09 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:46.401 23:03:09 -- common/autotest_common.sh@640 -- # local es=0 00:06:46.401 23:03:09 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:46.401 23:03:09 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:46.401 23:03:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:46.401 23:03:09 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:46.401 23:03:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:46.401 23:03:09 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:46.401 23:03:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:46.401 23:03:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.401 23:03:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.401 23:03:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.401 23:03:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.401 23:03:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.401 23:03:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.401 23:03:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.401 23:03:09 -- accel/accel.sh@42 -- # jq -r . 
00:06:46.663 Unsupported workload type: foobar 00:06:46.663 [2024-06-07 23:03:09.085976] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:46.663 accel_perf options: 00:06:46.663 [-h help message] 00:06:46.663 [-q queue depth per core] 00:06:46.663 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:46.663 [-T number of threads per core 00:06:46.663 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:46.663 [-t time in seconds] 00:06:46.663 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:46.663 [ dif_verify, , dif_generate, dif_generate_copy 00:06:46.663 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:46.663 [-l for compress/decompress workloads, name of uncompressed input file 00:06:46.663 [-S for crc32c workload, use this seed value (default 0) 00:06:46.663 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:46.663 [-f for fill workload, use this BYTE value (default 255) 00:06:46.663 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:46.663 [-y verify result if this switch is on] 00:06:46.663 [-a tasks to allocate per core (default: same value as -q)] 00:06:46.663 Can be used to spread operations across a wider range of memory. 00:06:46.663 23:03:09 -- common/autotest_common.sh@643 -- # es=1 00:06:46.663 23:03:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:46.663 23:03:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:46.663 23:03:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:46.663 00:06:46.663 real 0m0.034s 00:06:46.663 user 0m0.022s 00:06:46.663 sys 0m0.012s 00:06:46.663 23:03:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.663 23:03:09 -- common/autotest_common.sh@10 -- # set +x 00:06:46.663 ************************************ 00:06:46.663 END TEST accel_wrong_workload 00:06:46.663 ************************************ 00:06:46.663 Error: writing output failed: Broken pipe 00:06:46.663 23:03:09 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:46.663 23:03:09 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:46.663 23:03:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.663 23:03:09 -- common/autotest_common.sh@10 -- # set +x 00:06:46.663 ************************************ 00:06:46.663 START TEST accel_negative_buffers 00:06:46.663 ************************************ 00:06:46.663 23:03:09 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:46.663 23:03:09 -- common/autotest_common.sh@640 -- # local es=0 00:06:46.663 23:03:09 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:46.663 23:03:09 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:46.663 23:03:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:46.663 23:03:09 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:46.663 23:03:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:46.663 23:03:09 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:06:46.663 23:03:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:06:46.663 23:03:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.663 23:03:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.663 23:03:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.663 23:03:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.663 23:03:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.663 23:03:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.663 23:03:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.663 23:03:09 -- accel/accel.sh@42 -- # jq -r . 00:06:46.663 -x option must be non-negative. 00:06:46.663 [2024-06-07 23:03:09.159690] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:46.663 accel_perf options: 00:06:46.663 [-h help message] 00:06:46.663 [-q queue depth per core] 00:06:46.663 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:46.663 [-T number of threads per core 00:06:46.663 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:46.663 [-t time in seconds] 00:06:46.663 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:46.663 [ dif_verify, , dif_generate, dif_generate_copy 00:06:46.663 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:46.663 [-l for compress/decompress workloads, name of uncompressed input file 00:06:46.663 [-S for crc32c workload, use this seed value (default 0) 00:06:46.663 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:46.663 [-f for fill workload, use this BYTE value (default 255) 00:06:46.663 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:46.663 [-y verify result if this switch is on] 00:06:46.663 [-a tasks to allocate per core (default: same value as -q)] 00:06:46.663 Can be used to spread operations across a wider range of memory. 
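(Per the usage text above, the xor workload's -x option names the number of source buffers and must be at least 2, so the -x -1 passed by this test is rejected and, as with the foobar workload before it, the NOT wrapper counts the non-zero exit as a pass. A passing xor invocation would look roughly like this, as an illustration only, not part of the test run:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2
)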
00:06:46.663 23:03:09 -- common/autotest_common.sh@643 -- # es=1 00:06:46.663 23:03:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:46.663 23:03:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:46.663 23:03:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:46.663 00:06:46.663 real 0m0.037s 00:06:46.663 user 0m0.023s 00:06:46.663 sys 0m0.013s 00:06:46.663 23:03:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.663 23:03:09 -- common/autotest_common.sh@10 -- # set +x 00:06:46.663 ************************************ 00:06:46.663 END TEST accel_negative_buffers 00:06:46.663 ************************************ 00:06:46.663 Error: writing output failed: Broken pipe 00:06:46.663 23:03:09 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:46.663 23:03:09 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:46.663 23:03:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.663 23:03:09 -- common/autotest_common.sh@10 -- # set +x 00:06:46.663 ************************************ 00:06:46.663 START TEST accel_crc32c 00:06:46.663 ************************************ 00:06:46.663 23:03:09 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:46.663 23:03:09 -- accel/accel.sh@16 -- # local accel_opc 00:06:46.663 23:03:09 -- accel/accel.sh@17 -- # local accel_module 00:06:46.663 23:03:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:46.663 23:03:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:46.663 23:03:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.663 23:03:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.663 23:03:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.663 23:03:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.663 23:03:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.663 23:03:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.663 23:03:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.663 23:03:09 -- accel/accel.sh@42 -- # jq -r . 00:06:46.663 [2024-06-07 23:03:09.239786] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:46.663 [2024-06-07 23:03:09.239879] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2627549 ] 00:06:46.663 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.663 [2024-06-07 23:03:09.314681] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.938 [2024-06-07 23:03:09.351099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.877 23:03:10 -- accel/accel.sh@18 -- # out=' 00:06:47.877 SPDK Configuration: 00:06:47.877 Core mask: 0x1 00:06:47.877 00:06:47.877 Accel Perf Configuration: 00:06:47.877 Workload Type: crc32c 00:06:47.877 CRC-32C seed: 32 00:06:47.877 Transfer size: 4096 bytes 00:06:47.877 Vector count 1 00:06:47.877 Module: software 00:06:47.877 Queue depth: 32 00:06:47.877 Allocate depth: 32 00:06:47.877 # threads/core: 1 00:06:47.877 Run time: 1 seconds 00:06:47.877 Verify: Yes 00:06:47.877 00:06:47.877 Running for 1 seconds... 
00:06:47.877 00:06:47.877 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:47.877 ------------------------------------------------------------------------------------ 00:06:47.877 0,0 448032/s 1750 MiB/s 0 0 00:06:47.877 ==================================================================================== 00:06:47.877 Total 448032/s 1750 MiB/s 0 0' 00:06:47.877 23:03:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:47.877 23:03:10 -- accel/accel.sh@20 -- # IFS=: 00:06:47.877 23:03:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:47.877 23:03:10 -- accel/accel.sh@20 -- # read -r var val 00:06:47.877 23:03:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.877 23:03:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.877 23:03:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.877 23:03:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.877 23:03:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.877 23:03:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.877 23:03:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.877 23:03:10 -- accel/accel.sh@42 -- # jq -r . 00:06:47.877 [2024-06-07 23:03:10.475756] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:47.877 [2024-06-07 23:03:10.475798] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2628075 ] 00:06:47.877 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.877 [2024-06-07 23:03:10.526130] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.877 [2024-06-07 23:03:10.554695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.138 23:03:10 -- accel/accel.sh@21 -- # val= 00:06:48.138 23:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.138 23:03:10 -- accel/accel.sh@20 -- # IFS=: 00:06:48.138 23:03:10 -- accel/accel.sh@20 -- # read -r var val 00:06:48.138 23:03:10 -- accel/accel.sh@21 -- # val= 00:06:48.138 23:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.138 23:03:10 -- accel/accel.sh@20 -- # IFS=: 00:06:48.138 23:03:10 -- accel/accel.sh@20 -- # read -r var val 00:06:48.138 23:03:10 -- accel/accel.sh@21 -- # val=0x1 00:06:48.138 23:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.138 23:03:10 -- accel/accel.sh@20 -- # IFS=: 00:06:48.138 23:03:10 -- accel/accel.sh@20 -- # read -r var val 00:06:48.138 23:03:10 -- accel/accel.sh@21 -- # val= 00:06:48.138 23:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.138 23:03:10 -- accel/accel.sh@20 -- # IFS=: 00:06:48.138 23:03:10 -- accel/accel.sh@20 -- # read -r var val 00:06:48.138 23:03:10 -- accel/accel.sh@21 -- # val= 00:06:48.138 23:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.138 23:03:10 -- accel/accel.sh@20 -- # IFS=: 00:06:48.138 23:03:10 -- accel/accel.sh@20 -- # read -r var val 00:06:48.138 23:03:10 -- accel/accel.sh@21 -- # val=crc32c 00:06:48.138 23:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.138 23:03:10 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:48.138 23:03:10 -- accel/accel.sh@20 -- # IFS=: 00:06:48.138 23:03:10 -- accel/accel.sh@20 -- # read -r var val 00:06:48.138 23:03:10 -- accel/accel.sh@21 -- # val=32 00:06:48.138 23:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.138 23:03:10 -- accel/accel.sh@20 -- # IFS=: 00:06:48.138 
23:03:10 -- accel/accel.sh@20 -- # read -r var val 00:06:48.138 23:03:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:48.138 23:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.138 23:03:10 -- accel/accel.sh@20 -- # IFS=: 00:06:48.138 23:03:10 -- accel/accel.sh@20 -- # read -r var val 00:06:48.138 23:03:10 -- accel/accel.sh@21 -- # val= 00:06:48.138 23:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.138 23:03:10 -- accel/accel.sh@20 -- # IFS=: 00:06:48.138 23:03:10 -- accel/accel.sh@20 -- # read -r var val 00:06:48.138 23:03:10 -- accel/accel.sh@21 -- # val=software 00:06:48.138 23:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.138 23:03:10 -- accel/accel.sh@23 -- # accel_module=software 00:06:48.138 23:03:10 -- accel/accel.sh@20 -- # IFS=: 00:06:48.138 23:03:10 -- accel/accel.sh@20 -- # read -r var val 00:06:48.138 23:03:10 -- accel/accel.sh@21 -- # val=32 00:06:48.138 23:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.138 23:03:10 -- accel/accel.sh@20 -- # IFS=: 00:06:48.138 23:03:10 -- accel/accel.sh@20 -- # read -r var val 00:06:48.138 23:03:10 -- accel/accel.sh@21 -- # val=32 00:06:48.138 23:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.138 23:03:10 -- accel/accel.sh@20 -- # IFS=: 00:06:48.138 23:03:10 -- accel/accel.sh@20 -- # read -r var val 00:06:48.138 23:03:10 -- accel/accel.sh@21 -- # val=1 00:06:48.138 23:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.138 23:03:10 -- accel/accel.sh@20 -- # IFS=: 00:06:48.138 23:03:10 -- accel/accel.sh@20 -- # read -r var val 00:06:48.138 23:03:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:48.138 23:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.138 23:03:10 -- accel/accel.sh@20 -- # IFS=: 00:06:48.138 23:03:10 -- accel/accel.sh@20 -- # read -r var val 00:06:48.138 23:03:10 -- accel/accel.sh@21 -- # val=Yes 00:06:48.138 23:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.138 23:03:10 -- accel/accel.sh@20 -- # IFS=: 00:06:48.138 23:03:10 -- accel/accel.sh@20 -- # read -r var val 00:06:48.138 23:03:10 -- accel/accel.sh@21 -- # val= 00:06:48.138 23:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.138 23:03:10 -- accel/accel.sh@20 -- # IFS=: 00:06:48.138 23:03:10 -- accel/accel.sh@20 -- # read -r var val 00:06:48.138 23:03:10 -- accel/accel.sh@21 -- # val= 00:06:48.138 23:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.138 23:03:10 -- accel/accel.sh@20 -- # IFS=: 00:06:48.138 23:03:10 -- accel/accel.sh@20 -- # read -r var val 00:06:49.079 23:03:11 -- accel/accel.sh@21 -- # val= 00:06:49.079 23:03:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.079 23:03:11 -- accel/accel.sh@20 -- # IFS=: 00:06:49.079 23:03:11 -- accel/accel.sh@20 -- # read -r var val 00:06:49.079 23:03:11 -- accel/accel.sh@21 -- # val= 00:06:49.079 23:03:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.079 23:03:11 -- accel/accel.sh@20 -- # IFS=: 00:06:49.079 23:03:11 -- accel/accel.sh@20 -- # read -r var val 00:06:49.079 23:03:11 -- accel/accel.sh@21 -- # val= 00:06:49.079 23:03:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.079 23:03:11 -- accel/accel.sh@20 -- # IFS=: 00:06:49.079 23:03:11 -- accel/accel.sh@20 -- # read -r var val 00:06:49.079 23:03:11 -- accel/accel.sh@21 -- # val= 00:06:49.079 23:03:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.079 23:03:11 -- accel/accel.sh@20 -- # IFS=: 00:06:49.079 23:03:11 -- accel/accel.sh@20 -- # read -r var val 00:06:49.079 23:03:11 -- accel/accel.sh@21 -- # val= 00:06:49.079 23:03:11 -- accel/accel.sh@22 -- # case "$var" in 
00:06:49.079 23:03:11 -- accel/accel.sh@20 -- # IFS=: 00:06:49.079 23:03:11 -- accel/accel.sh@20 -- # read -r var val 00:06:49.079 23:03:11 -- accel/accel.sh@21 -- # val= 00:06:49.079 23:03:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.079 23:03:11 -- accel/accel.sh@20 -- # IFS=: 00:06:49.079 23:03:11 -- accel/accel.sh@20 -- # read -r var val 00:06:49.079 23:03:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:49.079 23:03:11 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:49.079 23:03:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.079 00:06:49.079 real 0m2.460s 00:06:49.079 user 0m2.261s 00:06:49.079 sys 0m0.203s 00:06:49.079 23:03:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.079 23:03:11 -- common/autotest_common.sh@10 -- # set +x 00:06:49.079 ************************************ 00:06:49.079 END TEST accel_crc32c 00:06:49.079 ************************************ 00:06:49.079 23:03:11 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:49.079 23:03:11 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:49.079 23:03:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:49.079 23:03:11 -- common/autotest_common.sh@10 -- # set +x 00:06:49.079 ************************************ 00:06:49.079 START TEST accel_crc32c_C2 00:06:49.079 ************************************ 00:06:49.079 23:03:11 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:49.079 23:03:11 -- accel/accel.sh@16 -- # local accel_opc 00:06:49.079 23:03:11 -- accel/accel.sh@17 -- # local accel_module 00:06:49.079 23:03:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:49.079 23:03:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:49.079 23:03:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.079 23:03:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.079 23:03:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.079 23:03:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.079 23:03:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.079 23:03:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.079 23:03:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.079 23:03:11 -- accel/accel.sh@42 -- # jq -r . 00:06:49.079 [2024-06-07 23:03:11.738708] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:49.079 [2024-06-07 23:03:11.738802] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2628337 ] 00:06:49.340 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.340 [2024-06-07 23:03:11.801798] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.340 [2024-06-07 23:03:11.832172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.281 23:03:12 -- accel/accel.sh@18 -- # out=' 00:06:50.281 SPDK Configuration: 00:06:50.281 Core mask: 0x1 00:06:50.281 00:06:50.281 Accel Perf Configuration: 00:06:50.281 Workload Type: crc32c 00:06:50.281 CRC-32C seed: 0 00:06:50.281 Transfer size: 4096 bytes 00:06:50.281 Vector count 2 00:06:50.281 Module: software 00:06:50.281 Queue depth: 32 00:06:50.281 Allocate depth: 32 00:06:50.281 # threads/core: 1 00:06:50.281 Run time: 1 seconds 00:06:50.281 Verify: Yes 00:06:50.281 00:06:50.281 Running for 1 seconds... 00:06:50.281 00:06:50.281 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:50.281 ------------------------------------------------------------------------------------ 00:06:50.281 0,0 372000/s 2906 MiB/s 0 0 00:06:50.281 ==================================================================================== 00:06:50.281 Total 372000/s 1453 MiB/s 0 0' 00:06:50.281 23:03:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:50.281 23:03:12 -- accel/accel.sh@20 -- # IFS=: 00:06:50.281 23:03:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:50.281 23:03:12 -- accel/accel.sh@20 -- # read -r var val 00:06:50.281 23:03:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.281 23:03:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.281 23:03:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.281 23:03:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.281 23:03:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.281 23:03:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.281 23:03:12 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.281 23:03:12 -- accel/accel.sh@42 -- # jq -r . 00:06:50.281 [2024-06-07 23:03:12.954224] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:50.281 [2024-06-07 23:03:12.954273] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2628481 ] 00:06:50.543 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.543 [2024-06-07 23:03:13.005534] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.543 [2024-06-07 23:03:13.033756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.543 23:03:13 -- accel/accel.sh@21 -- # val= 00:06:50.543 23:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.543 23:03:13 -- accel/accel.sh@20 -- # IFS=: 00:06:50.543 23:03:13 -- accel/accel.sh@20 -- # read -r var val 00:06:50.543 23:03:13 -- accel/accel.sh@21 -- # val= 00:06:50.543 23:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.543 23:03:13 -- accel/accel.sh@20 -- # IFS=: 00:06:50.543 23:03:13 -- accel/accel.sh@20 -- # read -r var val 00:06:50.543 23:03:13 -- accel/accel.sh@21 -- # val=0x1 00:06:50.543 23:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.543 23:03:13 -- accel/accel.sh@20 -- # IFS=: 00:06:50.543 23:03:13 -- accel/accel.sh@20 -- # read -r var val 00:06:50.543 23:03:13 -- accel/accel.sh@21 -- # val= 00:06:50.543 23:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.543 23:03:13 -- accel/accel.sh@20 -- # IFS=: 00:06:50.543 23:03:13 -- accel/accel.sh@20 -- # read -r var val 00:06:50.543 23:03:13 -- accel/accel.sh@21 -- # val= 00:06:50.543 23:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.543 23:03:13 -- accel/accel.sh@20 -- # IFS=: 00:06:50.543 23:03:13 -- accel/accel.sh@20 -- # read -r var val 00:06:50.543 23:03:13 -- accel/accel.sh@21 -- # val=crc32c 00:06:50.543 23:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.543 23:03:13 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:50.543 23:03:13 -- accel/accel.sh@20 -- # IFS=: 00:06:50.543 23:03:13 -- accel/accel.sh@20 -- # read -r var val 00:06:50.543 23:03:13 -- accel/accel.sh@21 -- # val=0 00:06:50.543 23:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.543 23:03:13 -- accel/accel.sh@20 -- # IFS=: 00:06:50.543 23:03:13 -- accel/accel.sh@20 -- # read -r var val 00:06:50.543 23:03:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:50.543 23:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.543 23:03:13 -- accel/accel.sh@20 -- # IFS=: 00:06:50.543 23:03:13 -- accel/accel.sh@20 -- # read -r var val 00:06:50.543 23:03:13 -- accel/accel.sh@21 -- # val= 00:06:50.543 23:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.543 23:03:13 -- accel/accel.sh@20 -- # IFS=: 00:06:50.543 23:03:13 -- accel/accel.sh@20 -- # read -r var val 00:06:50.543 23:03:13 -- accel/accel.sh@21 -- # val=software 00:06:50.543 23:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.543 23:03:13 -- accel/accel.sh@23 -- # accel_module=software 00:06:50.543 23:03:13 -- accel/accel.sh@20 -- # IFS=: 00:06:50.543 23:03:13 -- accel/accel.sh@20 -- # read -r var val 00:06:50.543 23:03:13 -- accel/accel.sh@21 -- # val=32 00:06:50.543 23:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.543 23:03:13 -- accel/accel.sh@20 -- # IFS=: 00:06:50.543 23:03:13 -- accel/accel.sh@20 -- # read -r var val 00:06:50.543 23:03:13 -- accel/accel.sh@21 -- # val=32 00:06:50.543 23:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.543 23:03:13 -- accel/accel.sh@20 -- # IFS=: 00:06:50.543 23:03:13 -- accel/accel.sh@20 -- # read -r var val 00:06:50.543 23:03:13 -- 
accel/accel.sh@21 -- # val=1 00:06:50.543 23:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.543 23:03:13 -- accel/accel.sh@20 -- # IFS=: 00:06:50.543 23:03:13 -- accel/accel.sh@20 -- # read -r var val 00:06:50.543 23:03:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:50.543 23:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.543 23:03:13 -- accel/accel.sh@20 -- # IFS=: 00:06:50.543 23:03:13 -- accel/accel.sh@20 -- # read -r var val 00:06:50.543 23:03:13 -- accel/accel.sh@21 -- # val=Yes 00:06:50.543 23:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.543 23:03:13 -- accel/accel.sh@20 -- # IFS=: 00:06:50.543 23:03:13 -- accel/accel.sh@20 -- # read -r var val 00:06:50.543 23:03:13 -- accel/accel.sh@21 -- # val= 00:06:50.543 23:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.543 23:03:13 -- accel/accel.sh@20 -- # IFS=: 00:06:50.543 23:03:13 -- accel/accel.sh@20 -- # read -r var val 00:06:50.543 23:03:13 -- accel/accel.sh@21 -- # val= 00:06:50.543 23:03:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.543 23:03:13 -- accel/accel.sh@20 -- # IFS=: 00:06:50.543 23:03:13 -- accel/accel.sh@20 -- # read -r var val 00:06:51.485 23:03:14 -- accel/accel.sh@21 -- # val= 00:06:51.485 23:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.485 23:03:14 -- accel/accel.sh@20 -- # IFS=: 00:06:51.485 23:03:14 -- accel/accel.sh@20 -- # read -r var val 00:06:51.485 23:03:14 -- accel/accel.sh@21 -- # val= 00:06:51.485 23:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.485 23:03:14 -- accel/accel.sh@20 -- # IFS=: 00:06:51.485 23:03:14 -- accel/accel.sh@20 -- # read -r var val 00:06:51.485 23:03:14 -- accel/accel.sh@21 -- # val= 00:06:51.485 23:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.485 23:03:14 -- accel/accel.sh@20 -- # IFS=: 00:06:51.485 23:03:14 -- accel/accel.sh@20 -- # read -r var val 00:06:51.485 23:03:14 -- accel/accel.sh@21 -- # val= 00:06:51.485 23:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.485 23:03:14 -- accel/accel.sh@20 -- # IFS=: 00:06:51.485 23:03:14 -- accel/accel.sh@20 -- # read -r var val 00:06:51.485 23:03:14 -- accel/accel.sh@21 -- # val= 00:06:51.485 23:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.485 23:03:14 -- accel/accel.sh@20 -- # IFS=: 00:06:51.485 23:03:14 -- accel/accel.sh@20 -- # read -r var val 00:06:51.485 23:03:14 -- accel/accel.sh@21 -- # val= 00:06:51.485 23:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.485 23:03:14 -- accel/accel.sh@20 -- # IFS=: 00:06:51.485 23:03:14 -- accel/accel.sh@20 -- # read -r var val 00:06:51.485 23:03:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:51.485 23:03:14 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:51.485 23:03:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.485 00:06:51.485 real 0m2.440s 00:06:51.485 user 0m2.268s 00:06:51.485 sys 0m0.179s 00:06:51.485 23:03:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.485 23:03:14 -- common/autotest_common.sh@10 -- # set +x 00:06:51.485 ************************************ 00:06:51.485 END TEST accel_crc32c_C2 00:06:51.485 ************************************ 00:06:51.745 23:03:14 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:51.745 23:03:14 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:51.745 23:03:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:51.745 23:03:14 -- common/autotest_common.sh@10 -- # set +x 00:06:51.745 ************************************ 00:06:51.745 START TEST accel_copy 
00:06:51.745 ************************************ 00:06:51.745 23:03:14 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:06:51.745 23:03:14 -- accel/accel.sh@16 -- # local accel_opc 00:06:51.745 23:03:14 -- accel/accel.sh@17 -- # local accel_module 00:06:51.745 23:03:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:51.745 23:03:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:51.745 23:03:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.745 23:03:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.745 23:03:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.745 23:03:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.745 23:03:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.745 23:03:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.745 23:03:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.745 23:03:14 -- accel/accel.sh@42 -- # jq -r . 00:06:51.745 [2024-06-07 23:03:14.219529] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:51.745 [2024-06-07 23:03:14.219618] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2628832 ] 00:06:51.745 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.745 [2024-06-07 23:03:14.281141] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.745 [2024-06-07 23:03:14.310376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.130 23:03:15 -- accel/accel.sh@18 -- # out=' 00:06:53.130 SPDK Configuration: 00:06:53.130 Core mask: 0x1 00:06:53.130 00:06:53.130 Accel Perf Configuration: 00:06:53.130 Workload Type: copy 00:06:53.130 Transfer size: 4096 bytes 00:06:53.130 Vector count 1 00:06:53.130 Module: software 00:06:53.130 Queue depth: 32 00:06:53.131 Allocate depth: 32 00:06:53.131 # threads/core: 1 00:06:53.131 Run time: 1 seconds 00:06:53.131 Verify: Yes 00:06:53.131 00:06:53.131 Running for 1 seconds... 00:06:53.131 00:06:53.131 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:53.131 ------------------------------------------------------------------------------------ 00:06:53.131 0,0 302112/s 1180 MiB/s 0 0 00:06:53.131 ==================================================================================== 00:06:53.131 Total 302112/s 1180 MiB/s 0 0' 00:06:53.131 23:03:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:53.131 23:03:15 -- accel/accel.sh@20 -- # IFS=: 00:06:53.131 23:03:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:53.131 23:03:15 -- accel/accel.sh@20 -- # read -r var val 00:06:53.131 23:03:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.131 23:03:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.131 23:03:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.131 23:03:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.131 23:03:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.131 23:03:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.131 23:03:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.131 23:03:15 -- accel/accel.sh@42 -- # jq -r . 00:06:53.131 [2024-06-07 23:03:15.432190] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:53.131 [2024-06-07 23:03:15.432235] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2629166 ] 00:06:53.131 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.131 [2024-06-07 23:03:15.482488] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.131 [2024-06-07 23:03:15.510487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.131 23:03:15 -- accel/accel.sh@21 -- # val= 00:06:53.131 23:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.131 23:03:15 -- accel/accel.sh@20 -- # IFS=: 00:06:53.131 23:03:15 -- accel/accel.sh@20 -- # read -r var val 00:06:53.131 23:03:15 -- accel/accel.sh@21 -- # val= 00:06:53.131 23:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.131 23:03:15 -- accel/accel.sh@20 -- # IFS=: 00:06:53.131 23:03:15 -- accel/accel.sh@20 -- # read -r var val 00:06:53.131 23:03:15 -- accel/accel.sh@21 -- # val=0x1 00:06:53.131 23:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.131 23:03:15 -- accel/accel.sh@20 -- # IFS=: 00:06:53.131 23:03:15 -- accel/accel.sh@20 -- # read -r var val 00:06:53.131 23:03:15 -- accel/accel.sh@21 -- # val= 00:06:53.131 23:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.131 23:03:15 -- accel/accel.sh@20 -- # IFS=: 00:06:53.131 23:03:15 -- accel/accel.sh@20 -- # read -r var val 00:06:53.131 23:03:15 -- accel/accel.sh@21 -- # val= 00:06:53.131 23:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.131 23:03:15 -- accel/accel.sh@20 -- # IFS=: 00:06:53.131 23:03:15 -- accel/accel.sh@20 -- # read -r var val 00:06:53.131 23:03:15 -- accel/accel.sh@21 -- # val=copy 00:06:53.131 23:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.131 23:03:15 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:53.131 23:03:15 -- accel/accel.sh@20 -- # IFS=: 00:06:53.131 23:03:15 -- accel/accel.sh@20 -- # read -r var val 00:06:53.131 23:03:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:53.131 23:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.131 23:03:15 -- accel/accel.sh@20 -- # IFS=: 00:06:53.131 23:03:15 -- accel/accel.sh@20 -- # read -r var val 00:06:53.131 23:03:15 -- accel/accel.sh@21 -- # val= 00:06:53.131 23:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.131 23:03:15 -- accel/accel.sh@20 -- # IFS=: 00:06:53.131 23:03:15 -- accel/accel.sh@20 -- # read -r var val 00:06:53.131 23:03:15 -- accel/accel.sh@21 -- # val=software 00:06:53.131 23:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.131 23:03:15 -- accel/accel.sh@23 -- # accel_module=software 00:06:53.131 23:03:15 -- accel/accel.sh@20 -- # IFS=: 00:06:53.131 23:03:15 -- accel/accel.sh@20 -- # read -r var val 00:06:53.131 23:03:15 -- accel/accel.sh@21 -- # val=32 00:06:53.131 23:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.131 23:03:15 -- accel/accel.sh@20 -- # IFS=: 00:06:53.131 23:03:15 -- accel/accel.sh@20 -- # read -r var val 00:06:53.131 23:03:15 -- accel/accel.sh@21 -- # val=32 00:06:53.131 23:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.131 23:03:15 -- accel/accel.sh@20 -- # IFS=: 00:06:53.131 23:03:15 -- accel/accel.sh@20 -- # read -r var val 00:06:53.131 23:03:15 -- accel/accel.sh@21 -- # val=1 00:06:53.131 23:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.131 23:03:15 -- accel/accel.sh@20 -- # IFS=: 00:06:53.131 23:03:15 -- accel/accel.sh@20 -- # read -r var val 00:06:53.131 23:03:15 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:53.131 23:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.131 23:03:15 -- accel/accel.sh@20 -- # IFS=: 00:06:53.131 23:03:15 -- accel/accel.sh@20 -- # read -r var val 00:06:53.131 23:03:15 -- accel/accel.sh@21 -- # val=Yes 00:06:53.131 23:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.131 23:03:15 -- accel/accel.sh@20 -- # IFS=: 00:06:53.131 23:03:15 -- accel/accel.sh@20 -- # read -r var val 00:06:53.131 23:03:15 -- accel/accel.sh@21 -- # val= 00:06:53.131 23:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.131 23:03:15 -- accel/accel.sh@20 -- # IFS=: 00:06:53.131 23:03:15 -- accel/accel.sh@20 -- # read -r var val 00:06:53.131 23:03:15 -- accel/accel.sh@21 -- # val= 00:06:53.131 23:03:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.131 23:03:15 -- accel/accel.sh@20 -- # IFS=: 00:06:53.131 23:03:15 -- accel/accel.sh@20 -- # read -r var val 00:06:54.073 23:03:16 -- accel/accel.sh@21 -- # val= 00:06:54.074 23:03:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.074 23:03:16 -- accel/accel.sh@20 -- # IFS=: 00:06:54.074 23:03:16 -- accel/accel.sh@20 -- # read -r var val 00:06:54.074 23:03:16 -- accel/accel.sh@21 -- # val= 00:06:54.074 23:03:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.074 23:03:16 -- accel/accel.sh@20 -- # IFS=: 00:06:54.074 23:03:16 -- accel/accel.sh@20 -- # read -r var val 00:06:54.074 23:03:16 -- accel/accel.sh@21 -- # val= 00:06:54.074 23:03:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.074 23:03:16 -- accel/accel.sh@20 -- # IFS=: 00:06:54.074 23:03:16 -- accel/accel.sh@20 -- # read -r var val 00:06:54.074 23:03:16 -- accel/accel.sh@21 -- # val= 00:06:54.074 23:03:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.074 23:03:16 -- accel/accel.sh@20 -- # IFS=: 00:06:54.074 23:03:16 -- accel/accel.sh@20 -- # read -r var val 00:06:54.074 23:03:16 -- accel/accel.sh@21 -- # val= 00:06:54.074 23:03:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.074 23:03:16 -- accel/accel.sh@20 -- # IFS=: 00:06:54.074 23:03:16 -- accel/accel.sh@20 -- # read -r var val 00:06:54.074 23:03:16 -- accel/accel.sh@21 -- # val= 00:06:54.074 23:03:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.074 23:03:16 -- accel/accel.sh@20 -- # IFS=: 00:06:54.074 23:03:16 -- accel/accel.sh@20 -- # read -r var val 00:06:54.074 23:03:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:54.074 23:03:16 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:54.074 23:03:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.074 00:06:54.074 real 0m2.436s 00:06:54.074 user 0m2.260s 00:06:54.074 sys 0m0.182s 00:06:54.074 23:03:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.074 23:03:16 -- common/autotest_common.sh@10 -- # set +x 00:06:54.074 ************************************ 00:06:54.074 END TEST accel_copy 00:06:54.074 ************************************ 00:06:54.074 23:03:16 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:54.074 23:03:16 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:54.074 23:03:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:54.074 23:03:16 -- common/autotest_common.sh@10 -- # set +x 00:06:54.074 ************************************ 00:06:54.074 START TEST accel_fill 00:06:54.074 ************************************ 00:06:54.074 23:03:16 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:54.074 23:03:16 -- accel/accel.sh@16 -- # local accel_opc 
00:06:54.074 23:03:16 -- accel/accel.sh@17 -- # local accel_module 00:06:54.074 23:03:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:54.074 23:03:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:54.074 23:03:16 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.074 23:03:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.074 23:03:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.074 23:03:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.074 23:03:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.074 23:03:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.074 23:03:16 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.074 23:03:16 -- accel/accel.sh@42 -- # jq -r . 00:06:54.074 [2024-06-07 23:03:16.694447] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:54.074 [2024-06-07 23:03:16.694535] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2629352 ] 00:06:54.074 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.335 [2024-06-07 23:03:16.757983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.335 [2024-06-07 23:03:16.788714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.276 23:03:17 -- accel/accel.sh@18 -- # out=' 00:06:55.276 SPDK Configuration: 00:06:55.276 Core mask: 0x1 00:06:55.276 00:06:55.276 Accel Perf Configuration: 00:06:55.276 Workload Type: fill 00:06:55.276 Fill pattern: 0x80 00:06:55.276 Transfer size: 4096 bytes 00:06:55.276 Vector count 1 00:06:55.276 Module: software 00:06:55.276 Queue depth: 64 00:06:55.276 Allocate depth: 64 00:06:55.276 # threads/core: 1 00:06:55.276 Run time: 1 seconds 00:06:55.276 Verify: Yes 00:06:55.276 00:06:55.276 Running for 1 seconds... 00:06:55.276 00:06:55.276 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:55.276 ------------------------------------------------------------------------------------ 00:06:55.276 0,0 463872/s 1812 MiB/s 0 0 00:06:55.276 ==================================================================================== 00:06:55.276 Total 463872/s 1812 MiB/s 0 0' 00:06:55.276 23:03:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:55.276 23:03:17 -- accel/accel.sh@20 -- # IFS=: 00:06:55.276 23:03:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:55.276 23:03:17 -- accel/accel.sh@20 -- # read -r var val 00:06:55.276 23:03:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.276 23:03:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.276 23:03:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.276 23:03:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.276 23:03:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.276 23:03:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.276 23:03:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.276 23:03:17 -- accel/accel.sh@42 -- # jq -r . 00:06:55.276 [2024-06-07 23:03:17.927858] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:55.276 [2024-06-07 23:03:17.927957] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2629540 ] 00:06:55.276 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.537 [2024-06-07 23:03:17.991115] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.537 [2024-06-07 23:03:18.019719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.537 23:03:18 -- accel/accel.sh@21 -- # val= 00:06:55.537 23:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.537 23:03:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.537 23:03:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.537 23:03:18 -- accel/accel.sh@21 -- # val= 00:06:55.537 23:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.537 23:03:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.537 23:03:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.537 23:03:18 -- accel/accel.sh@21 -- # val=0x1 00:06:55.537 23:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.537 23:03:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.537 23:03:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.537 23:03:18 -- accel/accel.sh@21 -- # val= 00:06:55.537 23:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.537 23:03:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.537 23:03:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.537 23:03:18 -- accel/accel.sh@21 -- # val= 00:06:55.537 23:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.537 23:03:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.537 23:03:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.537 23:03:18 -- accel/accel.sh@21 -- # val=fill 00:06:55.537 23:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.537 23:03:18 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:55.537 23:03:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.537 23:03:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.537 23:03:18 -- accel/accel.sh@21 -- # val=0x80 00:06:55.537 23:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.537 23:03:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.537 23:03:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.537 23:03:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:55.537 23:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.537 23:03:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.537 23:03:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.537 23:03:18 -- accel/accel.sh@21 -- # val= 00:06:55.537 23:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.538 23:03:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.538 23:03:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.538 23:03:18 -- accel/accel.sh@21 -- # val=software 00:06:55.538 23:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.538 23:03:18 -- accel/accel.sh@23 -- # accel_module=software 00:06:55.538 23:03:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.538 23:03:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.538 23:03:18 -- accel/accel.sh@21 -- # val=64 00:06:55.538 23:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.538 23:03:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.538 23:03:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.538 23:03:18 -- accel/accel.sh@21 -- # val=64 00:06:55.538 23:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.538 23:03:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.538 23:03:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.538 23:03:18 -- 
accel/accel.sh@21 -- # val=1 00:06:55.538 23:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.538 23:03:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.538 23:03:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.538 23:03:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:55.538 23:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.538 23:03:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.538 23:03:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.538 23:03:18 -- accel/accel.sh@21 -- # val=Yes 00:06:55.538 23:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.538 23:03:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.538 23:03:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.538 23:03:18 -- accel/accel.sh@21 -- # val= 00:06:55.538 23:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.538 23:03:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.538 23:03:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.538 23:03:18 -- accel/accel.sh@21 -- # val= 00:06:55.538 23:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.538 23:03:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.538 23:03:18 -- accel/accel.sh@20 -- # read -r var val 00:06:56.479 23:03:19 -- accel/accel.sh@21 -- # val= 00:06:56.479 23:03:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.479 23:03:19 -- accel/accel.sh@20 -- # IFS=: 00:06:56.479 23:03:19 -- accel/accel.sh@20 -- # read -r var val 00:06:56.479 23:03:19 -- accel/accel.sh@21 -- # val= 00:06:56.479 23:03:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.479 23:03:19 -- accel/accel.sh@20 -- # IFS=: 00:06:56.480 23:03:19 -- accel/accel.sh@20 -- # read -r var val 00:06:56.480 23:03:19 -- accel/accel.sh@21 -- # val= 00:06:56.480 23:03:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.480 23:03:19 -- accel/accel.sh@20 -- # IFS=: 00:06:56.480 23:03:19 -- accel/accel.sh@20 -- # read -r var val 00:06:56.480 23:03:19 -- accel/accel.sh@21 -- # val= 00:06:56.480 23:03:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.480 23:03:19 -- accel/accel.sh@20 -- # IFS=: 00:06:56.480 23:03:19 -- accel/accel.sh@20 -- # read -r var val 00:06:56.480 23:03:19 -- accel/accel.sh@21 -- # val= 00:06:56.480 23:03:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.480 23:03:19 -- accel/accel.sh@20 -- # IFS=: 00:06:56.480 23:03:19 -- accel/accel.sh@20 -- # read -r var val 00:06:56.480 23:03:19 -- accel/accel.sh@21 -- # val= 00:06:56.480 23:03:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.480 23:03:19 -- accel/accel.sh@20 -- # IFS=: 00:06:56.480 23:03:19 -- accel/accel.sh@20 -- # read -r var val 00:06:56.480 23:03:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:56.480 23:03:19 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:56.480 23:03:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.480 00:06:56.480 real 0m2.470s 00:06:56.480 user 0m2.266s 00:06:56.480 sys 0m0.210s 00:06:56.480 23:03:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.480 23:03:19 -- common/autotest_common.sh@10 -- # set +x 00:06:56.480 ************************************ 00:06:56.480 END TEST accel_fill 00:06:56.480 ************************************ 00:06:56.740 23:03:19 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:56.740 23:03:19 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:56.740 23:03:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:56.740 23:03:19 -- common/autotest_common.sh@10 -- # set +x 00:06:56.740 ************************************ 00:06:56.740 START TEST 
accel_copy_crc32c 00:06:56.740 ************************************ 00:06:56.740 23:03:19 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:06:56.740 23:03:19 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.740 23:03:19 -- accel/accel.sh@17 -- # local accel_module 00:06:56.740 23:03:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:56.740 23:03:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:56.740 23:03:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.740 23:03:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.740 23:03:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.740 23:03:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.740 23:03:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.740 23:03:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.740 23:03:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.740 23:03:19 -- accel/accel.sh@42 -- # jq -r . 00:06:56.740 [2024-06-07 23:03:19.202865] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:56.740 [2024-06-07 23:03:19.202940] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2629891 ] 00:06:56.740 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.740 [2024-06-07 23:03:19.264563] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.740 [2024-06-07 23:03:19.294446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.127 23:03:20 -- accel/accel.sh@18 -- # out=' 00:06:58.127 SPDK Configuration: 00:06:58.127 Core mask: 0x1 00:06:58.127 00:06:58.127 Accel Perf Configuration: 00:06:58.127 Workload Type: copy_crc32c 00:06:58.127 CRC-32C seed: 0 00:06:58.127 Vector size: 4096 bytes 00:06:58.127 Transfer size: 4096 bytes 00:06:58.127 Vector count 1 00:06:58.127 Module: software 00:06:58.127 Queue depth: 32 00:06:58.127 Allocate depth: 32 00:06:58.127 # threads/core: 1 00:06:58.127 Run time: 1 seconds 00:06:58.127 Verify: Yes 00:06:58.127 00:06:58.127 Running for 1 seconds... 00:06:58.127 00:06:58.127 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:58.127 ------------------------------------------------------------------------------------ 00:06:58.127 0,0 247872/s 968 MiB/s 0 0 00:06:58.127 ==================================================================================== 00:06:58.127 Total 247872/s 968 MiB/s 0 0' 00:06:58.127 23:03:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # IFS=: 00:06:58.127 23:03:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # read -r var val 00:06:58.127 23:03:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.127 23:03:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.127 23:03:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.127 23:03:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.127 23:03:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.127 23:03:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.127 23:03:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.127 23:03:20 -- accel/accel.sh@42 -- # jq -r . 
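(Every invocation in this log passes "-c /dev/fd/62": the test wrapper generates its accel JSON configuration on the fly and hands it to accel_perf through bash process substitution, which is why build_accel_config and "jq -r ." show up in the trace just above. A rough, self-contained sketch of that mechanism; the JSON body is a placeholder, since the real schema lives in build_accel_config in accel.sh:

    build_accel_config() {
        local accel_json_cfg=()               # would collect per-opcode JSON snippets; empty in these runs
        jq -r . <<< '{"placeholder": true}'   # pretty-print whatever config was assembled
    }
    # <(...) expands to a /dev/fd/<n> path, e.g. /dev/fd/62, which accel_perf reads as its config file
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -c <(build_accel_config) -t 1 -w copy_crc32c -y
)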
00:06:58.127 [2024-06-07 23:03:20.417349] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:58.127 [2024-06-07 23:03:20.417393] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2630227 ] 00:06:58.127 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.127 [2024-06-07 23:03:20.467569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.127 [2024-06-07 23:03:20.495644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.127 23:03:20 -- accel/accel.sh@21 -- # val= 00:06:58.127 23:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # IFS=: 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # read -r var val 00:06:58.127 23:03:20 -- accel/accel.sh@21 -- # val= 00:06:58.127 23:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # IFS=: 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # read -r var val 00:06:58.127 23:03:20 -- accel/accel.sh@21 -- # val=0x1 00:06:58.127 23:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # IFS=: 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # read -r var val 00:06:58.127 23:03:20 -- accel/accel.sh@21 -- # val= 00:06:58.127 23:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # IFS=: 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # read -r var val 00:06:58.127 23:03:20 -- accel/accel.sh@21 -- # val= 00:06:58.127 23:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # IFS=: 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # read -r var val 00:06:58.127 23:03:20 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:58.127 23:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.127 23:03:20 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # IFS=: 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # read -r var val 00:06:58.127 23:03:20 -- accel/accel.sh@21 -- # val=0 00:06:58.127 23:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # IFS=: 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # read -r var val 00:06:58.127 23:03:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:58.127 23:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # IFS=: 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # read -r var val 00:06:58.127 23:03:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:58.127 23:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # IFS=: 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # read -r var val 00:06:58.127 23:03:20 -- accel/accel.sh@21 -- # val= 00:06:58.127 23:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # IFS=: 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # read -r var val 00:06:58.127 23:03:20 -- accel/accel.sh@21 -- # val=software 00:06:58.127 23:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.127 23:03:20 -- accel/accel.sh@23 -- # accel_module=software 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # IFS=: 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # read -r var val 00:06:58.127 23:03:20 -- accel/accel.sh@21 -- # val=32 00:06:58.127 23:03:20 -- accel/accel.sh@22 -- # case "$var" in 
00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # IFS=: 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # read -r var val 00:06:58.127 23:03:20 -- accel/accel.sh@21 -- # val=32 00:06:58.127 23:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # IFS=: 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # read -r var val 00:06:58.127 23:03:20 -- accel/accel.sh@21 -- # val=1 00:06:58.127 23:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # IFS=: 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # read -r var val 00:06:58.127 23:03:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:58.127 23:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # IFS=: 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # read -r var val 00:06:58.127 23:03:20 -- accel/accel.sh@21 -- # val=Yes 00:06:58.127 23:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # IFS=: 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # read -r var val 00:06:58.127 23:03:20 -- accel/accel.sh@21 -- # val= 00:06:58.127 23:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # IFS=: 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # read -r var val 00:06:58.127 23:03:20 -- accel/accel.sh@21 -- # val= 00:06:58.127 23:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # IFS=: 00:06:58.127 23:03:20 -- accel/accel.sh@20 -- # read -r var val 00:06:59.087 23:03:21 -- accel/accel.sh@21 -- # val= 00:06:59.087 23:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.087 23:03:21 -- accel/accel.sh@20 -- # IFS=: 00:06:59.087 23:03:21 -- accel/accel.sh@20 -- # read -r var val 00:06:59.087 23:03:21 -- accel/accel.sh@21 -- # val= 00:06:59.087 23:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.087 23:03:21 -- accel/accel.sh@20 -- # IFS=: 00:06:59.087 23:03:21 -- accel/accel.sh@20 -- # read -r var val 00:06:59.087 23:03:21 -- accel/accel.sh@21 -- # val= 00:06:59.087 23:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.087 23:03:21 -- accel/accel.sh@20 -- # IFS=: 00:06:59.087 23:03:21 -- accel/accel.sh@20 -- # read -r var val 00:06:59.087 23:03:21 -- accel/accel.sh@21 -- # val= 00:06:59.087 23:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.087 23:03:21 -- accel/accel.sh@20 -- # IFS=: 00:06:59.087 23:03:21 -- accel/accel.sh@20 -- # read -r var val 00:06:59.087 23:03:21 -- accel/accel.sh@21 -- # val= 00:06:59.087 23:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.087 23:03:21 -- accel/accel.sh@20 -- # IFS=: 00:06:59.087 23:03:21 -- accel/accel.sh@20 -- # read -r var val 00:06:59.087 23:03:21 -- accel/accel.sh@21 -- # val= 00:06:59.087 23:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.087 23:03:21 -- accel/accel.sh@20 -- # IFS=: 00:06:59.087 23:03:21 -- accel/accel.sh@20 -- # read -r var val 00:06:59.087 23:03:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:59.087 23:03:21 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:59.087 23:03:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.087 00:06:59.087 real 0m2.436s 00:06:59.087 user 0m2.253s 00:06:59.087 sys 0m0.189s 00:06:59.087 23:03:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.087 23:03:21 -- common/autotest_common.sh@10 -- # set +x 00:06:59.087 ************************************ 00:06:59.087 END TEST accel_copy_crc32c 00:06:59.087 ************************************ 00:06:59.087 
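For reference, the copy_crc32c case traced above can be reproduced by hand with the same accel_perf example binary and flags that appear in the trace. This is a minimal sketch, assuming an SPDK tree built at the workspace path used by this job; the -c /dev/fd/62 argument seen in the trace only feeds the JSON accel config assembled by build_accel_config (presumably empty for these software-only runs) and is dropped here:

  # run the software copy_crc32c workload for 1 second and verify the results (-y)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w copy_crc32c -y

With no size or depth options given, the run falls back to the values printed in the SPDK Configuration block above: 4096-byte vectors and transfers, queue depth 32, one thread per core on core mask 0x1.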
23:03:21 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:59.087 23:03:21 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:59.087 23:03:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:59.087 23:03:21 -- common/autotest_common.sh@10 -- # set +x 00:06:59.087 ************************************ 00:06:59.087 START TEST accel_copy_crc32c_C2 00:06:59.087 ************************************ 00:06:59.087 23:03:21 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:59.087 23:03:21 -- accel/accel.sh@16 -- # local accel_opc 00:06:59.087 23:03:21 -- accel/accel.sh@17 -- # local accel_module 00:06:59.087 23:03:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:59.087 23:03:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:59.087 23:03:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.087 23:03:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.087 23:03:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.087 23:03:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.087 23:03:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.087 23:03:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.087 23:03:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.087 23:03:21 -- accel/accel.sh@42 -- # jq -r . 00:06:59.087 [2024-06-07 23:03:21.683669] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:59.087 [2024-06-07 23:03:21.683769] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2630373 ] 00:06:59.087 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.087 [2024-06-07 23:03:21.746005] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.430 [2024-06-07 23:03:21.776711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.372 23:03:22 -- accel/accel.sh@18 -- # out=' 00:07:00.372 SPDK Configuration: 00:07:00.372 Core mask: 0x1 00:07:00.372 00:07:00.372 Accel Perf Configuration: 00:07:00.372 Workload Type: copy_crc32c 00:07:00.372 CRC-32C seed: 0 00:07:00.372 Vector size: 4096 bytes 00:07:00.372 Transfer size: 8192 bytes 00:07:00.372 Vector count 2 00:07:00.372 Module: software 00:07:00.372 Queue depth: 32 00:07:00.372 Allocate depth: 32 00:07:00.372 # threads/core: 1 00:07:00.372 Run time: 1 seconds 00:07:00.372 Verify: Yes 00:07:00.372 00:07:00.372 Running for 1 seconds... 
00:07:00.372 00:07:00.372 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:00.372 ------------------------------------------------------------------------------------ 00:07:00.372 0,0 187040/s 1461 MiB/s 0 0 00:07:00.372 ==================================================================================== 00:07:00.372 Total 187040/s 730 MiB/s 0 0' 00:07:00.372 23:03:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:00.372 23:03:22 -- accel/accel.sh@20 -- # IFS=: 00:07:00.372 23:03:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:00.372 23:03:22 -- accel/accel.sh@20 -- # read -r var val 00:07:00.372 23:03:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.372 23:03:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.372 23:03:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.372 23:03:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.372 23:03:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.372 23:03:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.372 23:03:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.372 23:03:22 -- accel/accel.sh@42 -- # jq -r . 00:07:00.372 [2024-06-07 23:03:22.915638] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:00.372 [2024-06-07 23:03:22.915711] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2630600 ] 00:07:00.372 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.372 [2024-06-07 23:03:22.977311] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.372 [2024-06-07 23:03:23.005079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.372 23:03:23 -- accel/accel.sh@21 -- # val= 00:07:00.372 23:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.372 23:03:23 -- accel/accel.sh@20 -- # IFS=: 00:07:00.372 23:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:00.372 23:03:23 -- accel/accel.sh@21 -- # val= 00:07:00.372 23:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.372 23:03:23 -- accel/accel.sh@20 -- # IFS=: 00:07:00.372 23:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:00.372 23:03:23 -- accel/accel.sh@21 -- # val=0x1 00:07:00.372 23:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.372 23:03:23 -- accel/accel.sh@20 -- # IFS=: 00:07:00.372 23:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:00.372 23:03:23 -- accel/accel.sh@21 -- # val= 00:07:00.372 23:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.372 23:03:23 -- accel/accel.sh@20 -- # IFS=: 00:07:00.372 23:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:00.372 23:03:23 -- accel/accel.sh@21 -- # val= 00:07:00.372 23:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.372 23:03:23 -- accel/accel.sh@20 -- # IFS=: 00:07:00.372 23:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:00.372 23:03:23 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:00.372 23:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.372 23:03:23 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:00.372 23:03:23 -- accel/accel.sh@20 -- # IFS=: 00:07:00.372 23:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:00.372 23:03:23 -- accel/accel.sh@21 -- # val=0 00:07:00.372 23:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.372 23:03:23 -- accel/accel.sh@20 -- # IFS=: 
00:07:00.372 23:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:00.372 23:03:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:00.372 23:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.372 23:03:23 -- accel/accel.sh@20 -- # IFS=: 00:07:00.372 23:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:00.372 23:03:23 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:00.372 23:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.372 23:03:23 -- accel/accel.sh@20 -- # IFS=: 00:07:00.372 23:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:00.372 23:03:23 -- accel/accel.sh@21 -- # val= 00:07:00.372 23:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.372 23:03:23 -- accel/accel.sh@20 -- # IFS=: 00:07:00.372 23:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:00.372 23:03:23 -- accel/accel.sh@21 -- # val=software 00:07:00.372 23:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.372 23:03:23 -- accel/accel.sh@23 -- # accel_module=software 00:07:00.372 23:03:23 -- accel/accel.sh@20 -- # IFS=: 00:07:00.372 23:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:00.372 23:03:23 -- accel/accel.sh@21 -- # val=32 00:07:00.372 23:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.372 23:03:23 -- accel/accel.sh@20 -- # IFS=: 00:07:00.372 23:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:00.372 23:03:23 -- accel/accel.sh@21 -- # val=32 00:07:00.372 23:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.372 23:03:23 -- accel/accel.sh@20 -- # IFS=: 00:07:00.372 23:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:00.373 23:03:23 -- accel/accel.sh@21 -- # val=1 00:07:00.373 23:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.373 23:03:23 -- accel/accel.sh@20 -- # IFS=: 00:07:00.373 23:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:00.373 23:03:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:00.373 23:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.373 23:03:23 -- accel/accel.sh@20 -- # IFS=: 00:07:00.373 23:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:00.373 23:03:23 -- accel/accel.sh@21 -- # val=Yes 00:07:00.373 23:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.373 23:03:23 -- accel/accel.sh@20 -- # IFS=: 00:07:00.373 23:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:00.373 23:03:23 -- accel/accel.sh@21 -- # val= 00:07:00.373 23:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.373 23:03:23 -- accel/accel.sh@20 -- # IFS=: 00:07:00.373 23:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:00.373 23:03:23 -- accel/accel.sh@21 -- # val= 00:07:00.373 23:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.373 23:03:23 -- accel/accel.sh@20 -- # IFS=: 00:07:00.373 23:03:23 -- accel/accel.sh@20 -- # read -r var val 00:07:01.756 23:03:24 -- accel/accel.sh@21 -- # val= 00:07:01.756 23:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.756 23:03:24 -- accel/accel.sh@20 -- # IFS=: 00:07:01.756 23:03:24 -- accel/accel.sh@20 -- # read -r var val 00:07:01.756 23:03:24 -- accel/accel.sh@21 -- # val= 00:07:01.756 23:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.756 23:03:24 -- accel/accel.sh@20 -- # IFS=: 00:07:01.756 23:03:24 -- accel/accel.sh@20 -- # read -r var val 00:07:01.756 23:03:24 -- accel/accel.sh@21 -- # val= 00:07:01.756 23:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.756 23:03:24 -- accel/accel.sh@20 -- # IFS=: 00:07:01.756 23:03:24 -- accel/accel.sh@20 -- # read -r var val 00:07:01.756 23:03:24 -- accel/accel.sh@21 -- # val= 00:07:01.756 23:03:24 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:01.756 23:03:24 -- accel/accel.sh@20 -- # IFS=: 00:07:01.756 23:03:24 -- accel/accel.sh@20 -- # read -r var val 00:07:01.756 23:03:24 -- accel/accel.sh@21 -- # val= 00:07:01.756 23:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.756 23:03:24 -- accel/accel.sh@20 -- # IFS=: 00:07:01.756 23:03:24 -- accel/accel.sh@20 -- # read -r var val 00:07:01.756 23:03:24 -- accel/accel.sh@21 -- # val= 00:07:01.756 23:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.756 23:03:24 -- accel/accel.sh@20 -- # IFS=: 00:07:01.756 23:03:24 -- accel/accel.sh@20 -- # read -r var val 00:07:01.756 23:03:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:01.756 23:03:24 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:01.756 23:03:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.756 00:07:01.756 real 0m2.466s 00:07:01.757 user 0m2.268s 00:07:01.757 sys 0m0.205s 00:07:01.757 23:03:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.757 23:03:24 -- common/autotest_common.sh@10 -- # set +x 00:07:01.757 ************************************ 00:07:01.757 END TEST accel_copy_crc32c_C2 00:07:01.757 ************************************ 00:07:01.757 23:03:24 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:01.757 23:03:24 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:01.757 23:03:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:01.757 23:03:24 -- common/autotest_common.sh@10 -- # set +x 00:07:01.757 ************************************ 00:07:01.757 START TEST accel_dualcast 00:07:01.757 ************************************ 00:07:01.757 23:03:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:07:01.757 23:03:24 -- accel/accel.sh@16 -- # local accel_opc 00:07:01.757 23:03:24 -- accel/accel.sh@17 -- # local accel_module 00:07:01.757 23:03:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:01.757 23:03:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:01.757 23:03:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.757 23:03:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.757 23:03:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.757 23:03:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.757 23:03:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.757 23:03:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.757 23:03:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.757 23:03:24 -- accel/accel.sh@42 -- # jq -r . 00:07:01.757 [2024-06-07 23:03:24.189948] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:07:01.757 [2024-06-07 23:03:24.190049] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2630949 ] 00:07:01.757 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.757 [2024-06-07 23:03:24.252409] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.757 [2024-06-07 23:03:24.279871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.142 23:03:25 -- accel/accel.sh@18 -- # out=' 00:07:03.143 SPDK Configuration: 00:07:03.143 Core mask: 0x1 00:07:03.143 00:07:03.143 Accel Perf Configuration: 00:07:03.143 Workload Type: dualcast 00:07:03.143 Transfer size: 4096 bytes 00:07:03.143 Vector count 1 00:07:03.143 Module: software 00:07:03.143 Queue depth: 32 00:07:03.143 Allocate depth: 32 00:07:03.143 # threads/core: 1 00:07:03.143 Run time: 1 seconds 00:07:03.143 Verify: Yes 00:07:03.143 00:07:03.143 Running for 1 seconds... 00:07:03.143 00:07:03.143 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:03.143 ------------------------------------------------------------------------------------ 00:07:03.143 0,0 360960/s 1410 MiB/s 0 0 00:07:03.143 ==================================================================================== 00:07:03.143 Total 360960/s 1410 MiB/s 0 0' 00:07:03.143 23:03:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:03.143 23:03:25 -- accel/accel.sh@20 -- # IFS=: 00:07:03.143 23:03:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:03.143 23:03:25 -- accel/accel.sh@20 -- # read -r var val 00:07:03.143 23:03:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.143 23:03:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.143 23:03:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.143 23:03:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.143 23:03:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.143 23:03:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.143 23:03:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.143 23:03:25 -- accel/accel.sh@42 -- # jq -r . 00:07:03.143 [2024-06-07 23:03:25.402386] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:07:03.143 [2024-06-07 23:03:25.402429] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2631278 ] 00:07:03.143 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.143 [2024-06-07 23:03:25.452981] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.143 [2024-06-07 23:03:25.480903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.143 23:03:25 -- accel/accel.sh@21 -- # val= 00:07:03.143 23:03:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.143 23:03:25 -- accel/accel.sh@20 -- # IFS=: 00:07:03.143 23:03:25 -- accel/accel.sh@20 -- # read -r var val 00:07:03.143 23:03:25 -- accel/accel.sh@21 -- # val= 00:07:03.143 23:03:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.143 23:03:25 -- accel/accel.sh@20 -- # IFS=: 00:07:03.143 23:03:25 -- accel/accel.sh@20 -- # read -r var val 00:07:03.143 23:03:25 -- accel/accel.sh@21 -- # val=0x1 00:07:03.143 23:03:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.143 23:03:25 -- accel/accel.sh@20 -- # IFS=: 00:07:03.143 23:03:25 -- accel/accel.sh@20 -- # read -r var val 00:07:03.143 23:03:25 -- accel/accel.sh@21 -- # val= 00:07:03.143 23:03:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.143 23:03:25 -- accel/accel.sh@20 -- # IFS=: 00:07:03.143 23:03:25 -- accel/accel.sh@20 -- # read -r var val 00:07:03.143 23:03:25 -- accel/accel.sh@21 -- # val= 00:07:03.143 23:03:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.143 23:03:25 -- accel/accel.sh@20 -- # IFS=: 00:07:03.143 23:03:25 -- accel/accel.sh@20 -- # read -r var val 00:07:03.143 23:03:25 -- accel/accel.sh@21 -- # val=dualcast 00:07:03.143 23:03:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.143 23:03:25 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:03.143 23:03:25 -- accel/accel.sh@20 -- # IFS=: 00:07:03.143 23:03:25 -- accel/accel.sh@20 -- # read -r var val 00:07:03.143 23:03:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:03.143 23:03:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.143 23:03:25 -- accel/accel.sh@20 -- # IFS=: 00:07:03.143 23:03:25 -- accel/accel.sh@20 -- # read -r var val 00:07:03.143 23:03:25 -- accel/accel.sh@21 -- # val= 00:07:03.143 23:03:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.143 23:03:25 -- accel/accel.sh@20 -- # IFS=: 00:07:03.143 23:03:25 -- accel/accel.sh@20 -- # read -r var val 00:07:03.143 23:03:25 -- accel/accel.sh@21 -- # val=software 00:07:03.143 23:03:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.143 23:03:25 -- accel/accel.sh@23 -- # accel_module=software 00:07:03.143 23:03:25 -- accel/accel.sh@20 -- # IFS=: 00:07:03.143 23:03:25 -- accel/accel.sh@20 -- # read -r var val 00:07:03.143 23:03:25 -- accel/accel.sh@21 -- # val=32 00:07:03.143 23:03:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.143 23:03:25 -- accel/accel.sh@20 -- # IFS=: 00:07:03.143 23:03:25 -- accel/accel.sh@20 -- # read -r var val 00:07:03.143 23:03:25 -- accel/accel.sh@21 -- # val=32 00:07:03.143 23:03:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.143 23:03:25 -- accel/accel.sh@20 -- # IFS=: 00:07:03.143 23:03:25 -- accel/accel.sh@20 -- # read -r var val 00:07:03.143 23:03:25 -- accel/accel.sh@21 -- # val=1 00:07:03.143 23:03:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.143 23:03:25 -- accel/accel.sh@20 -- # IFS=: 00:07:03.143 23:03:25 -- accel/accel.sh@20 -- # read -r var val 00:07:03.143 23:03:25 
-- accel/accel.sh@21 -- # val='1 seconds' 00:07:03.143 23:03:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.143 23:03:25 -- accel/accel.sh@20 -- # IFS=: 00:07:03.143 23:03:25 -- accel/accel.sh@20 -- # read -r var val 00:07:03.143 23:03:25 -- accel/accel.sh@21 -- # val=Yes 00:07:03.143 23:03:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.143 23:03:25 -- accel/accel.sh@20 -- # IFS=: 00:07:03.143 23:03:25 -- accel/accel.sh@20 -- # read -r var val 00:07:03.143 23:03:25 -- accel/accel.sh@21 -- # val= 00:07:03.143 23:03:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.143 23:03:25 -- accel/accel.sh@20 -- # IFS=: 00:07:03.143 23:03:25 -- accel/accel.sh@20 -- # read -r var val 00:07:03.143 23:03:25 -- accel/accel.sh@21 -- # val= 00:07:03.143 23:03:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.143 23:03:25 -- accel/accel.sh@20 -- # IFS=: 00:07:03.143 23:03:25 -- accel/accel.sh@20 -- # read -r var val 00:07:04.086 23:03:26 -- accel/accel.sh@21 -- # val= 00:07:04.086 23:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.086 23:03:26 -- accel/accel.sh@20 -- # IFS=: 00:07:04.086 23:03:26 -- accel/accel.sh@20 -- # read -r var val 00:07:04.086 23:03:26 -- accel/accel.sh@21 -- # val= 00:07:04.086 23:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.086 23:03:26 -- accel/accel.sh@20 -- # IFS=: 00:07:04.086 23:03:26 -- accel/accel.sh@20 -- # read -r var val 00:07:04.086 23:03:26 -- accel/accel.sh@21 -- # val= 00:07:04.086 23:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.086 23:03:26 -- accel/accel.sh@20 -- # IFS=: 00:07:04.086 23:03:26 -- accel/accel.sh@20 -- # read -r var val 00:07:04.086 23:03:26 -- accel/accel.sh@21 -- # val= 00:07:04.086 23:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.086 23:03:26 -- accel/accel.sh@20 -- # IFS=: 00:07:04.086 23:03:26 -- accel/accel.sh@20 -- # read -r var val 00:07:04.086 23:03:26 -- accel/accel.sh@21 -- # val= 00:07:04.086 23:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.086 23:03:26 -- accel/accel.sh@20 -- # IFS=: 00:07:04.086 23:03:26 -- accel/accel.sh@20 -- # read -r var val 00:07:04.086 23:03:26 -- accel/accel.sh@21 -- # val= 00:07:04.086 23:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.086 23:03:26 -- accel/accel.sh@20 -- # IFS=: 00:07:04.086 23:03:26 -- accel/accel.sh@20 -- # read -r var val 00:07:04.086 23:03:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:04.086 23:03:26 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:04.086 23:03:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.086 00:07:04.086 real 0m2.437s 00:07:04.086 user 0m2.254s 00:07:04.086 sys 0m0.188s 00:07:04.086 23:03:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.086 23:03:26 -- common/autotest_common.sh@10 -- # set +x 00:07:04.086 ************************************ 00:07:04.086 END TEST accel_dualcast 00:07:04.086 ************************************ 00:07:04.086 23:03:26 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:04.086 23:03:26 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:04.086 23:03:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:04.086 23:03:26 -- common/autotest_common.sh@10 -- # set +x 00:07:04.086 ************************************ 00:07:04.086 START TEST accel_compare 00:07:04.086 ************************************ 00:07:04.086 23:03:26 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:07:04.086 23:03:26 -- accel/accel.sh@16 -- # local accel_opc 00:07:04.086 23:03:26 
-- accel/accel.sh@17 -- # local accel_module 00:07:04.086 23:03:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:04.086 23:03:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:04.086 23:03:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.086 23:03:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.086 23:03:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.086 23:03:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.086 23:03:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.086 23:03:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.086 23:03:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.086 23:03:26 -- accel/accel.sh@42 -- # jq -r . 00:07:04.086 [2024-06-07 23:03:26.664545] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:04.086 [2024-06-07 23:03:26.664632] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2631404 ] 00:07:04.086 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.086 [2024-06-07 23:03:26.727503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.086 [2024-06-07 23:03:26.758103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.473 23:03:27 -- accel/accel.sh@18 -- # out=' 00:07:05.473 SPDK Configuration: 00:07:05.473 Core mask: 0x1 00:07:05.473 00:07:05.473 Accel Perf Configuration: 00:07:05.473 Workload Type: compare 00:07:05.473 Transfer size: 4096 bytes 00:07:05.473 Vector count 1 00:07:05.473 Module: software 00:07:05.473 Queue depth: 32 00:07:05.473 Allocate depth: 32 00:07:05.473 # threads/core: 1 00:07:05.473 Run time: 1 seconds 00:07:05.473 Verify: Yes 00:07:05.473 00:07:05.473 Running for 1 seconds... 00:07:05.473 00:07:05.473 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:05.473 ------------------------------------------------------------------------------------ 00:07:05.473 0,0 427104/s 1668 MiB/s 0 0 00:07:05.473 ==================================================================================== 00:07:05.473 Total 427104/s 1668 MiB/s 0 0' 00:07:05.473 23:03:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:05.473 23:03:27 -- accel/accel.sh@20 -- # IFS=: 00:07:05.473 23:03:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:05.473 23:03:27 -- accel/accel.sh@20 -- # read -r var val 00:07:05.473 23:03:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.473 23:03:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.473 23:03:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.473 23:03:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.473 23:03:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.473 23:03:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.473 23:03:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.473 23:03:27 -- accel/accel.sh@42 -- # jq -r . 00:07:05.473 [2024-06-07 23:03:27.897004] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:07:05.473 [2024-06-07 23:03:27.897102] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2631659 ] 00:07:05.473 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.473 [2024-06-07 23:03:27.958634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.473 [2024-06-07 23:03:27.986903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.473 23:03:28 -- accel/accel.sh@21 -- # val= 00:07:05.473 23:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.473 23:03:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.473 23:03:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.473 23:03:28 -- accel/accel.sh@21 -- # val= 00:07:05.473 23:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.473 23:03:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.473 23:03:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.473 23:03:28 -- accel/accel.sh@21 -- # val=0x1 00:07:05.473 23:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.473 23:03:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.473 23:03:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.473 23:03:28 -- accel/accel.sh@21 -- # val= 00:07:05.473 23:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.473 23:03:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.473 23:03:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.473 23:03:28 -- accel/accel.sh@21 -- # val= 00:07:05.473 23:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.473 23:03:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.473 23:03:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.473 23:03:28 -- accel/accel.sh@21 -- # val=compare 00:07:05.473 23:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.473 23:03:28 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:05.473 23:03:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.473 23:03:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.473 23:03:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:05.473 23:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.473 23:03:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.473 23:03:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.473 23:03:28 -- accel/accel.sh@21 -- # val= 00:07:05.473 23:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.473 23:03:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.473 23:03:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.473 23:03:28 -- accel/accel.sh@21 -- # val=software 00:07:05.473 23:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.473 23:03:28 -- accel/accel.sh@23 -- # accel_module=software 00:07:05.473 23:03:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.473 23:03:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.473 23:03:28 -- accel/accel.sh@21 -- # val=32 00:07:05.473 23:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.473 23:03:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.473 23:03:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.473 23:03:28 -- accel/accel.sh@21 -- # val=32 00:07:05.473 23:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.473 23:03:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.473 23:03:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.473 23:03:28 -- accel/accel.sh@21 -- # val=1 00:07:05.473 23:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.473 23:03:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.473 23:03:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.473 23:03:28 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:07:05.473 23:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.473 23:03:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.473 23:03:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.473 23:03:28 -- accel/accel.sh@21 -- # val=Yes 00:07:05.473 23:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.473 23:03:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.473 23:03:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.474 23:03:28 -- accel/accel.sh@21 -- # val= 00:07:05.474 23:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.474 23:03:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.474 23:03:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.474 23:03:28 -- accel/accel.sh@21 -- # val= 00:07:05.474 23:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.474 23:03:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.474 23:03:28 -- accel/accel.sh@20 -- # read -r var val 00:07:06.419 23:03:29 -- accel/accel.sh@21 -- # val= 00:07:06.419 23:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.419 23:03:29 -- accel/accel.sh@20 -- # IFS=: 00:07:06.419 23:03:29 -- accel/accel.sh@20 -- # read -r var val 00:07:06.419 23:03:29 -- accel/accel.sh@21 -- # val= 00:07:06.419 23:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.419 23:03:29 -- accel/accel.sh@20 -- # IFS=: 00:07:06.419 23:03:29 -- accel/accel.sh@20 -- # read -r var val 00:07:06.419 23:03:29 -- accel/accel.sh@21 -- # val= 00:07:06.679 23:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.679 23:03:29 -- accel/accel.sh@20 -- # IFS=: 00:07:06.679 23:03:29 -- accel/accel.sh@20 -- # read -r var val 00:07:06.679 23:03:29 -- accel/accel.sh@21 -- # val= 00:07:06.679 23:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.679 23:03:29 -- accel/accel.sh@20 -- # IFS=: 00:07:06.679 23:03:29 -- accel/accel.sh@20 -- # read -r var val 00:07:06.679 23:03:29 -- accel/accel.sh@21 -- # val= 00:07:06.679 23:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.679 23:03:29 -- accel/accel.sh@20 -- # IFS=: 00:07:06.679 23:03:29 -- accel/accel.sh@20 -- # read -r var val 00:07:06.679 23:03:29 -- accel/accel.sh@21 -- # val= 00:07:06.679 23:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.679 23:03:29 -- accel/accel.sh@20 -- # IFS=: 00:07:06.679 23:03:29 -- accel/accel.sh@20 -- # read -r var val 00:07:06.679 23:03:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:06.679 23:03:29 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:06.679 23:03:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.679 00:07:06.679 real 0m2.466s 00:07:06.679 user 0m2.266s 00:07:06.679 sys 0m0.204s 00:07:06.679 23:03:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.679 23:03:29 -- common/autotest_common.sh@10 -- # set +x 00:07:06.679 ************************************ 00:07:06.679 END TEST accel_compare 00:07:06.679 ************************************ 00:07:06.679 23:03:29 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:06.679 23:03:29 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:06.679 23:03:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:06.679 23:03:29 -- common/autotest_common.sh@10 -- # set +x 00:07:06.679 ************************************ 00:07:06.679 START TEST accel_xor 00:07:06.679 ************************************ 00:07:06.679 23:03:29 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:07:06.679 23:03:29 -- accel/accel.sh@16 -- # local accel_opc 00:07:06.679 23:03:29 -- accel/accel.sh@17 
-- # local accel_module 00:07:06.679 23:03:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:06.679 23:03:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:06.679 23:03:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.679 23:03:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.679 23:03:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.679 23:03:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.679 23:03:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.679 23:03:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.679 23:03:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.679 23:03:29 -- accel/accel.sh@42 -- # jq -r . 00:07:06.679 [2024-06-07 23:03:29.170518] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:06.679 [2024-06-07 23:03:29.170617] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2632011 ] 00:07:06.679 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.679 [2024-06-07 23:03:29.240890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.679 [2024-06-07 23:03:29.271431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.063 23:03:30 -- accel/accel.sh@18 -- # out=' 00:07:08.063 SPDK Configuration: 00:07:08.063 Core mask: 0x1 00:07:08.063 00:07:08.063 Accel Perf Configuration: 00:07:08.063 Workload Type: xor 00:07:08.063 Source buffers: 2 00:07:08.063 Transfer size: 4096 bytes 00:07:08.063 Vector count 1 00:07:08.063 Module: software 00:07:08.063 Queue depth: 32 00:07:08.063 Allocate depth: 32 00:07:08.063 # threads/core: 1 00:07:08.063 Run time: 1 seconds 00:07:08.063 Verify: Yes 00:07:08.063 00:07:08.063 Running for 1 seconds... 00:07:08.063 00:07:08.063 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:08.063 ------------------------------------------------------------------------------------ 00:07:08.063 0,0 360160/s 1406 MiB/s 0 0 00:07:08.063 ==================================================================================== 00:07:08.063 Total 360160/s 1406 MiB/s 0 0' 00:07:08.063 23:03:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:08.063 23:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:08.063 23:03:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:08.063 23:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:08.063 23:03:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.063 23:03:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.063 23:03:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.063 23:03:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.063 23:03:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.063 23:03:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.063 23:03:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.063 23:03:30 -- accel/accel.sh@42 -- # jq -r . 00:07:08.063 [2024-06-07 23:03:30.394310] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:07:08.063 [2024-06-07 23:03:30.394357] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2632302 ] 00:07:08.063 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.063 [2024-06-07 23:03:30.444577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.063 [2024-06-07 23:03:30.472676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.063 23:03:30 -- accel/accel.sh@21 -- # val= 00:07:08.063 23:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.063 23:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:08.063 23:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:08.063 23:03:30 -- accel/accel.sh@21 -- # val= 00:07:08.063 23:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.063 23:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:08.063 23:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:08.063 23:03:30 -- accel/accel.sh@21 -- # val=0x1 00:07:08.063 23:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.063 23:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:08.063 23:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:08.063 23:03:30 -- accel/accel.sh@21 -- # val= 00:07:08.063 23:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.063 23:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:08.063 23:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:08.063 23:03:30 -- accel/accel.sh@21 -- # val= 00:07:08.063 23:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.063 23:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:08.063 23:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:08.063 23:03:30 -- accel/accel.sh@21 -- # val=xor 00:07:08.063 23:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.063 23:03:30 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:08.063 23:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:08.063 23:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:08.063 23:03:30 -- accel/accel.sh@21 -- # val=2 00:07:08.063 23:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.063 23:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:08.063 23:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:08.063 23:03:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:08.063 23:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.063 23:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:08.063 23:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:08.063 23:03:30 -- accel/accel.sh@21 -- # val= 00:07:08.063 23:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.063 23:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:08.063 23:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:08.063 23:03:30 -- accel/accel.sh@21 -- # val=software 00:07:08.063 23:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.063 23:03:30 -- accel/accel.sh@23 -- # accel_module=software 00:07:08.063 23:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:08.063 23:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:08.063 23:03:30 -- accel/accel.sh@21 -- # val=32 00:07:08.063 23:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.063 23:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:08.063 23:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:08.063 23:03:30 -- accel/accel.sh@21 -- # val=32 00:07:08.063 23:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.063 23:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:08.063 23:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:08.063 23:03:30 -- 
accel/accel.sh@21 -- # val=1 00:07:08.063 23:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.063 23:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:08.063 23:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:08.063 23:03:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:08.063 23:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.063 23:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:08.063 23:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:08.063 23:03:30 -- accel/accel.sh@21 -- # val=Yes 00:07:08.063 23:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.063 23:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:08.063 23:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:08.063 23:03:30 -- accel/accel.sh@21 -- # val= 00:07:08.063 23:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.063 23:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:08.063 23:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:08.063 23:03:30 -- accel/accel.sh@21 -- # val= 00:07:08.063 23:03:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.063 23:03:30 -- accel/accel.sh@20 -- # IFS=: 00:07:08.063 23:03:30 -- accel/accel.sh@20 -- # read -r var val 00:07:09.008 23:03:31 -- accel/accel.sh@21 -- # val= 00:07:09.008 23:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.008 23:03:31 -- accel/accel.sh@20 -- # IFS=: 00:07:09.008 23:03:31 -- accel/accel.sh@20 -- # read -r var val 00:07:09.008 23:03:31 -- accel/accel.sh@21 -- # val= 00:07:09.008 23:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.008 23:03:31 -- accel/accel.sh@20 -- # IFS=: 00:07:09.008 23:03:31 -- accel/accel.sh@20 -- # read -r var val 00:07:09.008 23:03:31 -- accel/accel.sh@21 -- # val= 00:07:09.008 23:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.008 23:03:31 -- accel/accel.sh@20 -- # IFS=: 00:07:09.008 23:03:31 -- accel/accel.sh@20 -- # read -r var val 00:07:09.008 23:03:31 -- accel/accel.sh@21 -- # val= 00:07:09.008 23:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.008 23:03:31 -- accel/accel.sh@20 -- # IFS=: 00:07:09.008 23:03:31 -- accel/accel.sh@20 -- # read -r var val 00:07:09.008 23:03:31 -- accel/accel.sh@21 -- # val= 00:07:09.008 23:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.008 23:03:31 -- accel/accel.sh@20 -- # IFS=: 00:07:09.008 23:03:31 -- accel/accel.sh@20 -- # read -r var val 00:07:09.008 23:03:31 -- accel/accel.sh@21 -- # val= 00:07:09.008 23:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.008 23:03:31 -- accel/accel.sh@20 -- # IFS=: 00:07:09.008 23:03:31 -- accel/accel.sh@20 -- # read -r var val 00:07:09.008 23:03:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:09.008 23:03:31 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:09.008 23:03:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.008 00:07:09.008 real 0m2.446s 00:07:09.008 user 0m2.271s 00:07:09.008 sys 0m0.181s 00:07:09.008 23:03:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.008 23:03:31 -- common/autotest_common.sh@10 -- # set +x 00:07:09.008 ************************************ 00:07:09.008 END TEST accel_xor 00:07:09.008 ************************************ 00:07:09.008 23:03:31 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:09.008 23:03:31 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:09.008 23:03:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:09.008 23:03:31 -- common/autotest_common.sh@10 -- # set +x 00:07:09.008 ************************************ 00:07:09.008 START TEST accel_xor 
00:07:09.008 ************************************ 00:07:09.008 23:03:31 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:07:09.008 23:03:31 -- accel/accel.sh@16 -- # local accel_opc 00:07:09.008 23:03:31 -- accel/accel.sh@17 -- # local accel_module 00:07:09.008 23:03:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:09.008 23:03:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:09.008 23:03:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.008 23:03:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.008 23:03:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.008 23:03:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.008 23:03:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.008 23:03:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.008 23:03:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.008 23:03:31 -- accel/accel.sh@42 -- # jq -r . 00:07:09.008 [2024-06-07 23:03:31.656951] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:09.008 [2024-06-07 23:03:31.657048] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2632422 ] 00:07:09.008 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.268 [2024-06-07 23:03:31.718373] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.268 [2024-06-07 23:03:31.748473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.210 23:03:32 -- accel/accel.sh@18 -- # out=' 00:07:10.210 SPDK Configuration: 00:07:10.210 Core mask: 0x1 00:07:10.210 00:07:10.210 Accel Perf Configuration: 00:07:10.210 Workload Type: xor 00:07:10.210 Source buffers: 3 00:07:10.210 Transfer size: 4096 bytes 00:07:10.210 Vector count 1 00:07:10.210 Module: software 00:07:10.210 Queue depth: 32 00:07:10.210 Allocate depth: 32 00:07:10.210 # threads/core: 1 00:07:10.210 Run time: 1 seconds 00:07:10.210 Verify: Yes 00:07:10.210 00:07:10.210 Running for 1 seconds... 00:07:10.210 00:07:10.210 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:10.210 ------------------------------------------------------------------------------------ 00:07:10.210 0,0 343264/s 1340 MiB/s 0 0 00:07:10.210 ==================================================================================== 00:07:10.210 Total 343264/s 1340 MiB/s 0 0' 00:07:10.210 23:03:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:10.210 23:03:32 -- accel/accel.sh@20 -- # IFS=: 00:07:10.210 23:03:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:10.210 23:03:32 -- accel/accel.sh@20 -- # read -r var val 00:07:10.210 23:03:32 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.210 23:03:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.210 23:03:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.210 23:03:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.210 23:03:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.210 23:03:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.210 23:03:32 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.210 23:03:32 -- accel/accel.sh@42 -- # jq -r . 00:07:10.210 [2024-06-07 23:03:32.870281] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:07:10.210 [2024-06-07 23:03:32.870324] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2632713 ] 00:07:10.210 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.471 [2024-06-07 23:03:32.920557] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.471 [2024-06-07 23:03:32.948439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.471 23:03:32 -- accel/accel.sh@21 -- # val= 00:07:10.471 23:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.471 23:03:32 -- accel/accel.sh@20 -- # IFS=: 00:07:10.471 23:03:32 -- accel/accel.sh@20 -- # read -r var val 00:07:10.471 23:03:32 -- accel/accel.sh@21 -- # val= 00:07:10.471 23:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.471 23:03:32 -- accel/accel.sh@20 -- # IFS=: 00:07:10.471 23:03:32 -- accel/accel.sh@20 -- # read -r var val 00:07:10.471 23:03:32 -- accel/accel.sh@21 -- # val=0x1 00:07:10.471 23:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.471 23:03:32 -- accel/accel.sh@20 -- # IFS=: 00:07:10.471 23:03:32 -- accel/accel.sh@20 -- # read -r var val 00:07:10.471 23:03:32 -- accel/accel.sh@21 -- # val= 00:07:10.471 23:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.471 23:03:32 -- accel/accel.sh@20 -- # IFS=: 00:07:10.471 23:03:32 -- accel/accel.sh@20 -- # read -r var val 00:07:10.471 23:03:32 -- accel/accel.sh@21 -- # val= 00:07:10.471 23:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.471 23:03:32 -- accel/accel.sh@20 -- # IFS=: 00:07:10.471 23:03:32 -- accel/accel.sh@20 -- # read -r var val 00:07:10.471 23:03:32 -- accel/accel.sh@21 -- # val=xor 00:07:10.471 23:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.471 23:03:32 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:10.471 23:03:32 -- accel/accel.sh@20 -- # IFS=: 00:07:10.471 23:03:32 -- accel/accel.sh@20 -- # read -r var val 00:07:10.471 23:03:32 -- accel/accel.sh@21 -- # val=3 00:07:10.471 23:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.471 23:03:32 -- accel/accel.sh@20 -- # IFS=: 00:07:10.471 23:03:32 -- accel/accel.sh@20 -- # read -r var val 00:07:10.471 23:03:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:10.471 23:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.471 23:03:32 -- accel/accel.sh@20 -- # IFS=: 00:07:10.471 23:03:32 -- accel/accel.sh@20 -- # read -r var val 00:07:10.471 23:03:32 -- accel/accel.sh@21 -- # val= 00:07:10.471 23:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.471 23:03:32 -- accel/accel.sh@20 -- # IFS=: 00:07:10.471 23:03:32 -- accel/accel.sh@20 -- # read -r var val 00:07:10.471 23:03:32 -- accel/accel.sh@21 -- # val=software 00:07:10.471 23:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.471 23:03:32 -- accel/accel.sh@23 -- # accel_module=software 00:07:10.471 23:03:32 -- accel/accel.sh@20 -- # IFS=: 00:07:10.471 23:03:32 -- accel/accel.sh@20 -- # read -r var val 00:07:10.471 23:03:32 -- accel/accel.sh@21 -- # val=32 00:07:10.471 23:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.471 23:03:32 -- accel/accel.sh@20 -- # IFS=: 00:07:10.471 23:03:32 -- accel/accel.sh@20 -- # read -r var val 00:07:10.471 23:03:32 -- accel/accel.sh@21 -- # val=32 00:07:10.471 23:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.471 23:03:32 -- accel/accel.sh@20 -- # IFS=: 00:07:10.471 23:03:32 -- accel/accel.sh@20 -- # read -r var val 00:07:10.471 23:03:32 -- 
accel/accel.sh@21 -- # val=1 00:07:10.471 23:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.471 23:03:32 -- accel/accel.sh@20 -- # IFS=: 00:07:10.471 23:03:32 -- accel/accel.sh@20 -- # read -r var val 00:07:10.471 23:03:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:10.471 23:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.471 23:03:32 -- accel/accel.sh@20 -- # IFS=: 00:07:10.471 23:03:32 -- accel/accel.sh@20 -- # read -r var val 00:07:10.471 23:03:32 -- accel/accel.sh@21 -- # val=Yes 00:07:10.471 23:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.471 23:03:32 -- accel/accel.sh@20 -- # IFS=: 00:07:10.471 23:03:32 -- accel/accel.sh@20 -- # read -r var val 00:07:10.471 23:03:32 -- accel/accel.sh@21 -- # val= 00:07:10.471 23:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.471 23:03:32 -- accel/accel.sh@20 -- # IFS=: 00:07:10.471 23:03:32 -- accel/accel.sh@20 -- # read -r var val 00:07:10.471 23:03:32 -- accel/accel.sh@21 -- # val= 00:07:10.471 23:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.471 23:03:32 -- accel/accel.sh@20 -- # IFS=: 00:07:10.471 23:03:32 -- accel/accel.sh@20 -- # read -r var val 00:07:11.413 23:03:34 -- accel/accel.sh@21 -- # val= 00:07:11.413 23:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.413 23:03:34 -- accel/accel.sh@20 -- # IFS=: 00:07:11.413 23:03:34 -- accel/accel.sh@20 -- # read -r var val 00:07:11.413 23:03:34 -- accel/accel.sh@21 -- # val= 00:07:11.413 23:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.413 23:03:34 -- accel/accel.sh@20 -- # IFS=: 00:07:11.413 23:03:34 -- accel/accel.sh@20 -- # read -r var val 00:07:11.413 23:03:34 -- accel/accel.sh@21 -- # val= 00:07:11.413 23:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.413 23:03:34 -- accel/accel.sh@20 -- # IFS=: 00:07:11.413 23:03:34 -- accel/accel.sh@20 -- # read -r var val 00:07:11.413 23:03:34 -- accel/accel.sh@21 -- # val= 00:07:11.413 23:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.413 23:03:34 -- accel/accel.sh@20 -- # IFS=: 00:07:11.413 23:03:34 -- accel/accel.sh@20 -- # read -r var val 00:07:11.413 23:03:34 -- accel/accel.sh@21 -- # val= 00:07:11.413 23:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.413 23:03:34 -- accel/accel.sh@20 -- # IFS=: 00:07:11.413 23:03:34 -- accel/accel.sh@20 -- # read -r var val 00:07:11.413 23:03:34 -- accel/accel.sh@21 -- # val= 00:07:11.413 23:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.413 23:03:34 -- accel/accel.sh@20 -- # IFS=: 00:07:11.413 23:03:34 -- accel/accel.sh@20 -- # read -r var val 00:07:11.413 23:03:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:11.413 23:03:34 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:11.413 23:03:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.413 00:07:11.413 real 0m2.437s 00:07:11.413 user 0m2.253s 00:07:11.413 sys 0m0.188s 00:07:11.413 23:03:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.413 23:03:34 -- common/autotest_common.sh@10 -- # set +x 00:07:11.413 ************************************ 00:07:11.414 END TEST accel_xor 00:07:11.414 ************************************ 00:07:11.674 23:03:34 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:11.674 23:03:34 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:11.674 23:03:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:11.674 23:03:34 -- common/autotest_common.sh@10 -- # set +x 00:07:11.674 ************************************ 00:07:11.674 START TEST 
accel_dif_verify 00:07:11.674 ************************************ 00:07:11.674 23:03:34 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:07:11.674 23:03:34 -- accel/accel.sh@16 -- # local accel_opc 00:07:11.674 23:03:34 -- accel/accel.sh@17 -- # local accel_module 00:07:11.674 23:03:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:11.674 23:03:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:11.674 23:03:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.674 23:03:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.674 23:03:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.674 23:03:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.674 23:03:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.674 23:03:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.674 23:03:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.674 23:03:34 -- accel/accel.sh@42 -- # jq -r . 00:07:11.674 [2024-06-07 23:03:34.136179] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:11.674 [2024-06-07 23:03:34.136456] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2633068 ] 00:07:11.674 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.674 [2024-06-07 23:03:34.197732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.674 [2024-06-07 23:03:34.227670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.059 23:03:35 -- accel/accel.sh@18 -- # out=' 00:07:13.059 SPDK Configuration: 00:07:13.059 Core mask: 0x1 00:07:13.059 00:07:13.059 Accel Perf Configuration: 00:07:13.059 Workload Type: dif_verify 00:07:13.059 Vector size: 4096 bytes 00:07:13.059 Transfer size: 4096 bytes 00:07:13.059 Block size: 512 bytes 00:07:13.059 Metadata size: 8 bytes 00:07:13.059 Vector count 1 00:07:13.059 Module: software 00:07:13.059 Queue depth: 32 00:07:13.059 Allocate depth: 32 00:07:13.059 # threads/core: 1 00:07:13.059 Run time: 1 seconds 00:07:13.059 Verify: No 00:07:13.059 00:07:13.059 Running for 1 seconds... 00:07:13.059 00:07:13.059 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:13.059 ------------------------------------------------------------------------------------ 00:07:13.059 0,0 94848/s 376 MiB/s 0 0 00:07:13.059 ==================================================================================== 00:07:13.059 Total 94848/s 370 MiB/s 0 0' 00:07:13.059 23:03:35 -- accel/accel.sh@20 -- # IFS=: 00:07:13.059 23:03:35 -- accel/accel.sh@20 -- # read -r var val 00:07:13.059 23:03:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:13.059 23:03:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:13.059 23:03:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.059 23:03:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.059 23:03:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.059 23:03:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.059 23:03:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.059 23:03:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.059 23:03:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.059 23:03:35 -- accel/accel.sh@42 -- # jq -r . 
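The dif_verify configuration above splits each 4096-byte transfer into 512-byte blocks, each with 8 bytes of DIF metadata ("Block size: 512 bytes", "Metadata size: 8 bytes"). The log does not show which integrity checks are enabled or the exact metadata layout, so the sketch below is only a conceptual stand-in: it assumes a T10-DIF-style guard CRC (polynomial 0x8BB7) stored in the first two bytes of each block's metadata, with the remaining tag bytes left zero.

```python
import struct

BLOCK_SIZE = 512   # "Block size: 512 bytes"
MD_SIZE = 8        # "Metadata size: 8 bytes"

def crc16_t10dif(data: bytes, crc: int = 0) -> int:
    """Bitwise CRC16 over one block (assumed guard algorithm, polynomial 0x8BB7)."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x8BB7) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def dif_generate(data: bytes) -> bytes:
    """Produce one 8-byte metadata entry per 512-byte block (guard CRC only)."""
    md = bytearray()
    for off in range(0, len(data), BLOCK_SIZE):
        guard = crc16_t10dif(data[off:off + BLOCK_SIZE])
        md += struct.pack(">H", guard) + bytes(MD_SIZE - 2)
    return bytes(md)

def dif_verify(data: bytes, md: bytes) -> bool:
    """Recompute every block's guard and compare it against the stored metadata."""
    return md == dif_generate(data)
```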
00:07:13.059 [2024-06-07 23:03:35.369600] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:13.059 [2024-06-07 23:03:35.369701] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2633288 ] 00:07:13.059 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.059 [2024-06-07 23:03:35.431011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.059 [2024-06-07 23:03:35.459570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.059 23:03:35 -- accel/accel.sh@21 -- # val= 00:07:13.059 23:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.059 23:03:35 -- accel/accel.sh@20 -- # IFS=: 00:07:13.059 23:03:35 -- accel/accel.sh@20 -- # read -r var val 00:07:13.060 23:03:35 -- accel/accel.sh@21 -- # val= 00:07:13.060 23:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.060 23:03:35 -- accel/accel.sh@20 -- # IFS=: 00:07:13.060 23:03:35 -- accel/accel.sh@20 -- # read -r var val 00:07:13.060 23:03:35 -- accel/accel.sh@21 -- # val=0x1 00:07:13.060 23:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.060 23:03:35 -- accel/accel.sh@20 -- # IFS=: 00:07:13.060 23:03:35 -- accel/accel.sh@20 -- # read -r var val 00:07:13.060 23:03:35 -- accel/accel.sh@21 -- # val= 00:07:13.060 23:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.060 23:03:35 -- accel/accel.sh@20 -- # IFS=: 00:07:13.060 23:03:35 -- accel/accel.sh@20 -- # read -r var val 00:07:13.060 23:03:35 -- accel/accel.sh@21 -- # val= 00:07:13.060 23:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.060 23:03:35 -- accel/accel.sh@20 -- # IFS=: 00:07:13.060 23:03:35 -- accel/accel.sh@20 -- # read -r var val 00:07:13.060 23:03:35 -- accel/accel.sh@21 -- # val=dif_verify 00:07:13.060 23:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.060 23:03:35 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:13.060 23:03:35 -- accel/accel.sh@20 -- # IFS=: 00:07:13.060 23:03:35 -- accel/accel.sh@20 -- # read -r var val 00:07:13.060 23:03:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:13.060 23:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.060 23:03:35 -- accel/accel.sh@20 -- # IFS=: 00:07:13.060 23:03:35 -- accel/accel.sh@20 -- # read -r var val 00:07:13.060 23:03:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:13.060 23:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.060 23:03:35 -- accel/accel.sh@20 -- # IFS=: 00:07:13.060 23:03:35 -- accel/accel.sh@20 -- # read -r var val 00:07:13.060 23:03:35 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:13.060 23:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.060 23:03:35 -- accel/accel.sh@20 -- # IFS=: 00:07:13.060 23:03:35 -- accel/accel.sh@20 -- # read -r var val 00:07:13.060 23:03:35 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:13.060 23:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.060 23:03:35 -- accel/accel.sh@20 -- # IFS=: 00:07:13.060 23:03:35 -- accel/accel.sh@20 -- # read -r var val 00:07:13.060 23:03:35 -- accel/accel.sh@21 -- # val= 00:07:13.060 23:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.060 23:03:35 -- accel/accel.sh@20 -- # IFS=: 00:07:13.060 23:03:35 -- accel/accel.sh@20 -- # read -r var val 00:07:13.060 23:03:35 -- accel/accel.sh@21 -- # val=software 00:07:13.060 23:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.060 23:03:35 -- accel/accel.sh@23 -- # 
accel_module=software 00:07:13.060 23:03:35 -- accel/accel.sh@20 -- # IFS=: 00:07:13.060 23:03:35 -- accel/accel.sh@20 -- # read -r var val 00:07:13.060 23:03:35 -- accel/accel.sh@21 -- # val=32 00:07:13.060 23:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.060 23:03:35 -- accel/accel.sh@20 -- # IFS=: 00:07:13.060 23:03:35 -- accel/accel.sh@20 -- # read -r var val 00:07:13.060 23:03:35 -- accel/accel.sh@21 -- # val=32 00:07:13.060 23:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.060 23:03:35 -- accel/accel.sh@20 -- # IFS=: 00:07:13.060 23:03:35 -- accel/accel.sh@20 -- # read -r var val 00:07:13.060 23:03:35 -- accel/accel.sh@21 -- # val=1 00:07:13.060 23:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.060 23:03:35 -- accel/accel.sh@20 -- # IFS=: 00:07:13.060 23:03:35 -- accel/accel.sh@20 -- # read -r var val 00:07:13.060 23:03:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:13.060 23:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.060 23:03:35 -- accel/accel.sh@20 -- # IFS=: 00:07:13.060 23:03:35 -- accel/accel.sh@20 -- # read -r var val 00:07:13.060 23:03:35 -- accel/accel.sh@21 -- # val=No 00:07:13.060 23:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.060 23:03:35 -- accel/accel.sh@20 -- # IFS=: 00:07:13.060 23:03:35 -- accel/accel.sh@20 -- # read -r var val 00:07:13.060 23:03:35 -- accel/accel.sh@21 -- # val= 00:07:13.060 23:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.060 23:03:35 -- accel/accel.sh@20 -- # IFS=: 00:07:13.060 23:03:35 -- accel/accel.sh@20 -- # read -r var val 00:07:13.060 23:03:35 -- accel/accel.sh@21 -- # val= 00:07:13.060 23:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.060 23:03:35 -- accel/accel.sh@20 -- # IFS=: 00:07:13.060 23:03:35 -- accel/accel.sh@20 -- # read -r var val 00:07:14.002 23:03:36 -- accel/accel.sh@21 -- # val= 00:07:14.002 23:03:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.002 23:03:36 -- accel/accel.sh@20 -- # IFS=: 00:07:14.002 23:03:36 -- accel/accel.sh@20 -- # read -r var val 00:07:14.002 23:03:36 -- accel/accel.sh@21 -- # val= 00:07:14.002 23:03:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.002 23:03:36 -- accel/accel.sh@20 -- # IFS=: 00:07:14.002 23:03:36 -- accel/accel.sh@20 -- # read -r var val 00:07:14.002 23:03:36 -- accel/accel.sh@21 -- # val= 00:07:14.002 23:03:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.002 23:03:36 -- accel/accel.sh@20 -- # IFS=: 00:07:14.002 23:03:36 -- accel/accel.sh@20 -- # read -r var val 00:07:14.002 23:03:36 -- accel/accel.sh@21 -- # val= 00:07:14.002 23:03:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.002 23:03:36 -- accel/accel.sh@20 -- # IFS=: 00:07:14.002 23:03:36 -- accel/accel.sh@20 -- # read -r var val 00:07:14.002 23:03:36 -- accel/accel.sh@21 -- # val= 00:07:14.002 23:03:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.002 23:03:36 -- accel/accel.sh@20 -- # IFS=: 00:07:14.002 23:03:36 -- accel/accel.sh@20 -- # read -r var val 00:07:14.002 23:03:36 -- accel/accel.sh@21 -- # val= 00:07:14.002 23:03:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.002 23:03:36 -- accel/accel.sh@20 -- # IFS=: 00:07:14.002 23:03:36 -- accel/accel.sh@20 -- # read -r var val 00:07:14.002 23:03:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:14.002 23:03:36 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:14.002 23:03:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.002 00:07:14.002 real 0m2.469s 00:07:14.002 user 0m2.281s 00:07:14.002 sys 0m0.197s 00:07:14.002 23:03:36 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.002 23:03:36 -- common/autotest_common.sh@10 -- # set +x 00:07:14.002 ************************************ 00:07:14.002 END TEST accel_dif_verify 00:07:14.002 ************************************ 00:07:14.002 23:03:36 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:14.002 23:03:36 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:14.002 23:03:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:14.002 23:03:36 -- common/autotest_common.sh@10 -- # set +x 00:07:14.002 ************************************ 00:07:14.002 START TEST accel_dif_generate 00:07:14.002 ************************************ 00:07:14.002 23:03:36 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:07:14.002 23:03:36 -- accel/accel.sh@16 -- # local accel_opc 00:07:14.002 23:03:36 -- accel/accel.sh@17 -- # local accel_module 00:07:14.002 23:03:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:14.002 23:03:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:14.002 23:03:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.002 23:03:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.002 23:03:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.002 23:03:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.002 23:03:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.002 23:03:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.002 23:03:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.002 23:03:36 -- accel/accel.sh@42 -- # jq -r . 00:07:14.002 [2024-06-07 23:03:36.646596] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:14.002 [2024-06-07 23:03:36.646689] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2633454 ] 00:07:14.002 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.262 [2024-06-07 23:03:36.708884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.262 [2024-06-07 23:03:36.739141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.203 23:03:37 -- accel/accel.sh@18 -- # out=' 00:07:15.203 SPDK Configuration: 00:07:15.203 Core mask: 0x1 00:07:15.203 00:07:15.203 Accel Perf Configuration: 00:07:15.203 Workload Type: dif_generate 00:07:15.203 Vector size: 4096 bytes 00:07:15.203 Transfer size: 4096 bytes 00:07:15.203 Block size: 512 bytes 00:07:15.203 Metadata size: 8 bytes 00:07:15.203 Vector count 1 00:07:15.203 Module: software 00:07:15.203 Queue depth: 32 00:07:15.203 Allocate depth: 32 00:07:15.203 # threads/core: 1 00:07:15.203 Run time: 1 seconds 00:07:15.203 Verify: No 00:07:15.203 00:07:15.203 Running for 1 seconds... 
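dif_generate, started above with the same 4096-byte transfers and 512+8 block layout, is the producer side of the operation sketched earlier: it emits per-block metadata instead of checking it. Reusing dif_generate/dif_verify from the dif_verify sketch, one transfer amounts to:

```python
import os

payload = os.urandom(4096)       # one transfer, per "Transfer size: 4096 bytes"
md = dif_generate(payload)       # dif_generate workload: produce the metadata
assert dif_verify(payload, md)   # dif_verify workload: re-check it
```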
00:07:15.203 00:07:15.203 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:15.203 ------------------------------------------------------------------------------------ 00:07:15.203 0,0 113248/s 449 MiB/s 0 0 00:07:15.203 ==================================================================================== 00:07:15.203 Total 113248/s 442 MiB/s 0 0' 00:07:15.203 23:03:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:15.203 23:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.203 23:03:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:15.203 23:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:15.203 23:03:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.203 23:03:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.203 23:03:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.203 23:03:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.204 23:03:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.204 23:03:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.204 23:03:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.204 23:03:37 -- accel/accel.sh@42 -- # jq -r . 00:07:15.204 [2024-06-07 23:03:37.861295] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:15.204 [2024-06-07 23:03:37.861338] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2633774 ] 00:07:15.204 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.464 [2024-06-07 23:03:37.911632] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.464 [2024-06-07 23:03:37.939642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.464 23:03:37 -- accel/accel.sh@21 -- # val= 00:07:15.464 23:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.464 23:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.464 23:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:15.464 23:03:37 -- accel/accel.sh@21 -- # val= 00:07:15.464 23:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.464 23:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.464 23:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:15.464 23:03:37 -- accel/accel.sh@21 -- # val=0x1 00:07:15.464 23:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.464 23:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.464 23:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:15.464 23:03:37 -- accel/accel.sh@21 -- # val= 00:07:15.464 23:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.464 23:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.464 23:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:15.464 23:03:37 -- accel/accel.sh@21 -- # val= 00:07:15.464 23:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.464 23:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.464 23:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:15.464 23:03:37 -- accel/accel.sh@21 -- # val=dif_generate 00:07:15.464 23:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.464 23:03:37 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:15.464 23:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.464 23:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:15.464 23:03:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:15.464 23:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.464 23:03:37 -- accel/accel.sh@20 -- # IFS=: 
00:07:15.464 23:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:15.464 23:03:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:15.464 23:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.464 23:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.464 23:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:15.464 23:03:37 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:15.465 23:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.465 23:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.465 23:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:15.465 23:03:37 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:15.465 23:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.465 23:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.465 23:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:15.465 23:03:37 -- accel/accel.sh@21 -- # val= 00:07:15.465 23:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.465 23:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.465 23:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:15.465 23:03:37 -- accel/accel.sh@21 -- # val=software 00:07:15.465 23:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.465 23:03:37 -- accel/accel.sh@23 -- # accel_module=software 00:07:15.465 23:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.465 23:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:15.465 23:03:37 -- accel/accel.sh@21 -- # val=32 00:07:15.465 23:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.465 23:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.465 23:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:15.465 23:03:37 -- accel/accel.sh@21 -- # val=32 00:07:15.465 23:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.465 23:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.465 23:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:15.465 23:03:37 -- accel/accel.sh@21 -- # val=1 00:07:15.465 23:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.465 23:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.465 23:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:15.465 23:03:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:15.465 23:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.465 23:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.465 23:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:15.465 23:03:37 -- accel/accel.sh@21 -- # val=No 00:07:15.465 23:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.465 23:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.465 23:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:15.465 23:03:37 -- accel/accel.sh@21 -- # val= 00:07:15.465 23:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.465 23:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.465 23:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:15.465 23:03:37 -- accel/accel.sh@21 -- # val= 00:07:15.465 23:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.465 23:03:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.465 23:03:37 -- accel/accel.sh@20 -- # read -r var val 00:07:16.407 23:03:39 -- accel/accel.sh@21 -- # val= 00:07:16.407 23:03:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.407 23:03:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.407 23:03:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.407 23:03:39 -- accel/accel.sh@21 -- # val= 00:07:16.407 23:03:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.407 23:03:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.407 23:03:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.407 23:03:39 -- accel/accel.sh@21 -- # val= 00:07:16.407 23:03:39 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:16.407 23:03:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.407 23:03:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.407 23:03:39 -- accel/accel.sh@21 -- # val= 00:07:16.407 23:03:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.407 23:03:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.407 23:03:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.407 23:03:39 -- accel/accel.sh@21 -- # val= 00:07:16.407 23:03:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.407 23:03:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.407 23:03:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.407 23:03:39 -- accel/accel.sh@21 -- # val= 00:07:16.407 23:03:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.407 23:03:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.407 23:03:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.407 23:03:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:16.407 23:03:39 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:16.407 23:03:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.407 00:07:16.407 real 0m2.436s 00:07:16.407 user 0m2.266s 00:07:16.407 sys 0m0.177s 00:07:16.407 23:03:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.407 23:03:39 -- common/autotest_common.sh@10 -- # set +x 00:07:16.407 ************************************ 00:07:16.407 END TEST accel_dif_generate 00:07:16.407 ************************************ 00:07:16.668 23:03:39 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:16.668 23:03:39 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:16.668 23:03:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:16.668 23:03:39 -- common/autotest_common.sh@10 -- # set +x 00:07:16.668 ************************************ 00:07:16.668 START TEST accel_dif_generate_copy 00:07:16.668 ************************************ 00:07:16.668 23:03:39 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:07:16.668 23:03:39 -- accel/accel.sh@16 -- # local accel_opc 00:07:16.668 23:03:39 -- accel/accel.sh@17 -- # local accel_module 00:07:16.668 23:03:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:16.668 23:03:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:16.668 23:03:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.668 23:03:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.668 23:03:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.668 23:03:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.668 23:03:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.668 23:03:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.668 23:03:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.668 23:03:39 -- accel/accel.sh@42 -- # jq -r . 00:07:16.668 [2024-06-07 23:03:39.123797] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:07:16.668 [2024-06-07 23:03:39.123892] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2634129 ] 00:07:16.668 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.668 [2024-06-07 23:03:39.185742] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.668 [2024-06-07 23:03:39.215643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.051 23:03:40 -- accel/accel.sh@18 -- # out=' 00:07:18.051 SPDK Configuration: 00:07:18.051 Core mask: 0x1 00:07:18.051 00:07:18.051 Accel Perf Configuration: 00:07:18.051 Workload Type: dif_generate_copy 00:07:18.051 Vector size: 4096 bytes 00:07:18.051 Transfer size: 4096 bytes 00:07:18.051 Vector count 1 00:07:18.051 Module: software 00:07:18.051 Queue depth: 32 00:07:18.051 Allocate depth: 32 00:07:18.051 # threads/core: 1 00:07:18.051 Run time: 1 seconds 00:07:18.051 Verify: No 00:07:18.051 00:07:18.051 Running for 1 seconds... 00:07:18.051 00:07:18.051 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:18.051 ------------------------------------------------------------------------------------ 00:07:18.051 0,0 87552/s 347 MiB/s 0 0 00:07:18.051 ==================================================================================== 00:07:18.051 Total 87552/s 342 MiB/s 0 0' 00:07:18.051 23:03:40 -- accel/accel.sh@20 -- # IFS=: 00:07:18.051 23:03:40 -- accel/accel.sh@20 -- # read -r var val 00:07:18.051 23:03:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:18.051 23:03:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:18.051 23:03:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.051 23:03:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.051 23:03:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.051 23:03:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.051 23:03:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.051 23:03:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.051 23:03:40 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.052 23:03:40 -- accel/accel.sh@42 -- # jq -r . 00:07:18.052 [2024-06-07 23:03:40.356560] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
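dif_generate_copy, as the name suggests, pairs the metadata generation above with a copy of the data into a destination buffer, which is consistent with its throughput landing below the copy-free dif_generate run. A quick sanity check of the reported bandwidth (approximate, since accel_perf divides by the measured elapsed time rather than exactly one second):

```python
transfers_per_sec = 87552                                # "Total 87552/s" above
mib_per_sec = transfers_per_sec * 4096 / (1024 * 1024)   # ~342 MiB/s, matching the Total line
```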
00:07:18.052 [2024-06-07 23:03:40.356653] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2634300 ] 00:07:18.052 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.052 [2024-06-07 23:03:40.418367] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.052 [2024-06-07 23:03:40.447159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.052 23:03:40 -- accel/accel.sh@21 -- # val= 00:07:18.052 23:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.052 23:03:40 -- accel/accel.sh@20 -- # IFS=: 00:07:18.052 23:03:40 -- accel/accel.sh@20 -- # read -r var val 00:07:18.052 23:03:40 -- accel/accel.sh@21 -- # val= 00:07:18.052 23:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.052 23:03:40 -- accel/accel.sh@20 -- # IFS=: 00:07:18.052 23:03:40 -- accel/accel.sh@20 -- # read -r var val 00:07:18.052 23:03:40 -- accel/accel.sh@21 -- # val=0x1 00:07:18.052 23:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.052 23:03:40 -- accel/accel.sh@20 -- # IFS=: 00:07:18.052 23:03:40 -- accel/accel.sh@20 -- # read -r var val 00:07:18.052 23:03:40 -- accel/accel.sh@21 -- # val= 00:07:18.052 23:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.052 23:03:40 -- accel/accel.sh@20 -- # IFS=: 00:07:18.052 23:03:40 -- accel/accel.sh@20 -- # read -r var val 00:07:18.052 23:03:40 -- accel/accel.sh@21 -- # val= 00:07:18.052 23:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.052 23:03:40 -- accel/accel.sh@20 -- # IFS=: 00:07:18.052 23:03:40 -- accel/accel.sh@20 -- # read -r var val 00:07:18.052 23:03:40 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:18.052 23:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.052 23:03:40 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:18.052 23:03:40 -- accel/accel.sh@20 -- # IFS=: 00:07:18.052 23:03:40 -- accel/accel.sh@20 -- # read -r var val 00:07:18.052 23:03:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:18.052 23:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.052 23:03:40 -- accel/accel.sh@20 -- # IFS=: 00:07:18.052 23:03:40 -- accel/accel.sh@20 -- # read -r var val 00:07:18.052 23:03:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:18.052 23:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.052 23:03:40 -- accel/accel.sh@20 -- # IFS=: 00:07:18.052 23:03:40 -- accel/accel.sh@20 -- # read -r var val 00:07:18.052 23:03:40 -- accel/accel.sh@21 -- # val= 00:07:18.052 23:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.052 23:03:40 -- accel/accel.sh@20 -- # IFS=: 00:07:18.052 23:03:40 -- accel/accel.sh@20 -- # read -r var val 00:07:18.052 23:03:40 -- accel/accel.sh@21 -- # val=software 00:07:18.052 23:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.052 23:03:40 -- accel/accel.sh@23 -- # accel_module=software 00:07:18.052 23:03:40 -- accel/accel.sh@20 -- # IFS=: 00:07:18.052 23:03:40 -- accel/accel.sh@20 -- # read -r var val 00:07:18.052 23:03:40 -- accel/accel.sh@21 -- # val=32 00:07:18.052 23:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.052 23:03:40 -- accel/accel.sh@20 -- # IFS=: 00:07:18.052 23:03:40 -- accel/accel.sh@20 -- # read -r var val 00:07:18.052 23:03:40 -- accel/accel.sh@21 -- # val=32 00:07:18.052 23:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.052 23:03:40 -- accel/accel.sh@20 -- # IFS=: 00:07:18.052 23:03:40 -- accel/accel.sh@20 -- # read -r 
var val 00:07:18.052 23:03:40 -- accel/accel.sh@21 -- # val=1 00:07:18.052 23:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.052 23:03:40 -- accel/accel.sh@20 -- # IFS=: 00:07:18.052 23:03:40 -- accel/accel.sh@20 -- # read -r var val 00:07:18.052 23:03:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:18.052 23:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.052 23:03:40 -- accel/accel.sh@20 -- # IFS=: 00:07:18.052 23:03:40 -- accel/accel.sh@20 -- # read -r var val 00:07:18.052 23:03:40 -- accel/accel.sh@21 -- # val=No 00:07:18.052 23:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.052 23:03:40 -- accel/accel.sh@20 -- # IFS=: 00:07:18.052 23:03:40 -- accel/accel.sh@20 -- # read -r var val 00:07:18.052 23:03:40 -- accel/accel.sh@21 -- # val= 00:07:18.052 23:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.052 23:03:40 -- accel/accel.sh@20 -- # IFS=: 00:07:18.052 23:03:40 -- accel/accel.sh@20 -- # read -r var val 00:07:18.052 23:03:40 -- accel/accel.sh@21 -- # val= 00:07:18.052 23:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.052 23:03:40 -- accel/accel.sh@20 -- # IFS=: 00:07:18.052 23:03:40 -- accel/accel.sh@20 -- # read -r var val 00:07:18.992 23:03:41 -- accel/accel.sh@21 -- # val= 00:07:18.992 23:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.992 23:03:41 -- accel/accel.sh@20 -- # IFS=: 00:07:18.992 23:03:41 -- accel/accel.sh@20 -- # read -r var val 00:07:18.992 23:03:41 -- accel/accel.sh@21 -- # val= 00:07:18.992 23:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.992 23:03:41 -- accel/accel.sh@20 -- # IFS=: 00:07:18.992 23:03:41 -- accel/accel.sh@20 -- # read -r var val 00:07:18.992 23:03:41 -- accel/accel.sh@21 -- # val= 00:07:18.992 23:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.992 23:03:41 -- accel/accel.sh@20 -- # IFS=: 00:07:18.992 23:03:41 -- accel/accel.sh@20 -- # read -r var val 00:07:18.992 23:03:41 -- accel/accel.sh@21 -- # val= 00:07:18.992 23:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.992 23:03:41 -- accel/accel.sh@20 -- # IFS=: 00:07:18.992 23:03:41 -- accel/accel.sh@20 -- # read -r var val 00:07:18.992 23:03:41 -- accel/accel.sh@21 -- # val= 00:07:18.992 23:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.992 23:03:41 -- accel/accel.sh@20 -- # IFS=: 00:07:18.992 23:03:41 -- accel/accel.sh@20 -- # read -r var val 00:07:18.992 23:03:41 -- accel/accel.sh@21 -- # val= 00:07:18.992 23:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.992 23:03:41 -- accel/accel.sh@20 -- # IFS=: 00:07:18.992 23:03:41 -- accel/accel.sh@20 -- # read -r var val 00:07:18.992 23:03:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:18.992 23:03:41 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:18.992 23:03:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.992 00:07:18.992 real 0m2.468s 00:07:18.992 user 0m2.273s 00:07:18.992 sys 0m0.202s 00:07:18.992 23:03:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.992 23:03:41 -- common/autotest_common.sh@10 -- # set +x 00:07:18.992 ************************************ 00:07:18.992 END TEST accel_dif_generate_copy 00:07:18.992 ************************************ 00:07:18.992 23:03:41 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:18.992 23:03:41 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:18.992 23:03:41 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:18.992 23:03:41 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:07:18.992 23:03:41 -- common/autotest_common.sh@10 -- # set +x 00:07:18.992 ************************************ 00:07:18.992 START TEST accel_comp 00:07:18.992 ************************************ 00:07:18.992 23:03:41 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:18.992 23:03:41 -- accel/accel.sh@16 -- # local accel_opc 00:07:18.992 23:03:41 -- accel/accel.sh@17 -- # local accel_module 00:07:18.992 23:03:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:18.992 23:03:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:18.992 23:03:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.992 23:03:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.992 23:03:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.992 23:03:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.992 23:03:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.992 23:03:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.992 23:03:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.992 23:03:41 -- accel/accel.sh@42 -- # jq -r . 00:07:18.992 [2024-06-07 23:03:41.612151] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:18.992 [2024-06-07 23:03:41.612196] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2634493 ] 00:07:18.992 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.992 [2024-06-07 23:03:41.662594] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.253 [2024-06-07 23:03:41.690570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.194 23:03:42 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:20.194 00:07:20.194 SPDK Configuration: 00:07:20.194 Core mask: 0x1 00:07:20.194 00:07:20.194 Accel Perf Configuration: 00:07:20.194 Workload Type: compress 00:07:20.194 Transfer size: 4096 bytes 00:07:20.194 Vector count 1 00:07:20.194 Module: software 00:07:20.194 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:20.194 Queue depth: 32 00:07:20.194 Allocate depth: 32 00:07:20.194 # threads/core: 1 00:07:20.194 Run time: 1 seconds 00:07:20.194 Verify: No 00:07:20.194 00:07:20.194 Running for 1 seconds... 
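The compress workload above runs the software module over the input file (test/accel/bib) in 4096-byte transfers. The sketch below is only a stand-in for that behaviour; zlib is used purely for illustration and is not claimed to be the engine behind SPDK's software compress path.

```python
import zlib

CHUNK = 4096  # "Transfer size: 4096 bytes"

def compress_chunks(path):
    """Compress a file chunk-by-chunk and return (input_bytes, output_bytes)."""
    total_in = total_out = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                break
            total_in += len(chunk)
            total_out += len(zlib.compress(chunk))
    return total_in, total_out

# Hypothetical local copy of the input file used in the run above:
# n_in, n_out = compress_chunks("bib")
```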
00:07:20.194 00:07:20.194 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:20.194 ------------------------------------------------------------------------------------ 00:07:20.194 0,0 47296/s 197 MiB/s 0 0 00:07:20.194 ==================================================================================== 00:07:20.194 Total 47296/s 184 MiB/s 0 0' 00:07:20.194 23:03:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:20.194 23:03:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.194 23:03:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:20.194 23:03:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.194 23:03:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.194 23:03:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.194 23:03:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.194 23:03:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.194 23:03:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.194 23:03:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.194 23:03:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.194 23:03:42 -- accel/accel.sh@42 -- # jq -r . 00:07:20.194 [2024-06-07 23:03:42.815350] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:20.194 [2024-06-07 23:03:42.815394] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2634827 ] 00:07:20.194 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.194 [2024-06-07 23:03:42.865672] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.455 [2024-06-07 23:03:42.893735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.455 23:03:42 -- accel/accel.sh@21 -- # val= 00:07:20.455 23:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.455 23:03:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.455 23:03:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.455 23:03:42 -- accel/accel.sh@21 -- # val= 00:07:20.455 23:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.455 23:03:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.455 23:03:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.455 23:03:42 -- accel/accel.sh@21 -- # val= 00:07:20.455 23:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.455 23:03:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.455 23:03:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.455 23:03:42 -- accel/accel.sh@21 -- # val=0x1 00:07:20.455 23:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.455 23:03:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.455 23:03:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.455 23:03:42 -- accel/accel.sh@21 -- # val= 00:07:20.455 23:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.455 23:03:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.455 23:03:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.455 23:03:42 -- accel/accel.sh@21 -- # val= 00:07:20.455 23:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.455 23:03:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.455 23:03:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.455 23:03:42 -- accel/accel.sh@21 -- # val=compress 00:07:20.455 23:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.455 
23:03:42 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:20.455 23:03:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.455 23:03:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.455 23:03:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:20.455 23:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.455 23:03:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.455 23:03:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.455 23:03:42 -- accel/accel.sh@21 -- # val= 00:07:20.455 23:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.455 23:03:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.455 23:03:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.455 23:03:42 -- accel/accel.sh@21 -- # val=software 00:07:20.455 23:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.455 23:03:42 -- accel/accel.sh@23 -- # accel_module=software 00:07:20.455 23:03:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.455 23:03:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.455 23:03:42 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:20.455 23:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.455 23:03:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.455 23:03:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.455 23:03:42 -- accel/accel.sh@21 -- # val=32 00:07:20.455 23:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.455 23:03:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.455 23:03:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.455 23:03:42 -- accel/accel.sh@21 -- # val=32 00:07:20.455 23:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.455 23:03:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.455 23:03:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.455 23:03:42 -- accel/accel.sh@21 -- # val=1 00:07:20.455 23:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.455 23:03:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.455 23:03:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.455 23:03:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:20.455 23:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.455 23:03:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.455 23:03:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.455 23:03:42 -- accel/accel.sh@21 -- # val=No 00:07:20.455 23:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.455 23:03:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.455 23:03:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.455 23:03:42 -- accel/accel.sh@21 -- # val= 00:07:20.455 23:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.455 23:03:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.455 23:03:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.455 23:03:42 -- accel/accel.sh@21 -- # val= 00:07:20.455 23:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.455 23:03:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.455 23:03:42 -- accel/accel.sh@20 -- # read -r var val 00:07:21.396 23:03:44 -- accel/accel.sh@21 -- # val= 00:07:21.396 23:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.396 23:03:44 -- accel/accel.sh@20 -- # IFS=: 00:07:21.396 23:03:44 -- accel/accel.sh@20 -- # read -r var val 00:07:21.396 23:03:44 -- accel/accel.sh@21 -- # val= 00:07:21.396 23:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.396 23:03:44 -- accel/accel.sh@20 -- # IFS=: 00:07:21.396 23:03:44 -- accel/accel.sh@20 -- # read -r var val 00:07:21.396 23:03:44 -- accel/accel.sh@21 -- # val= 00:07:21.396 23:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.396 23:03:44 -- accel/accel.sh@20 -- # 
IFS=: 00:07:21.396 23:03:44 -- accel/accel.sh@20 -- # read -r var val 00:07:21.396 23:03:44 -- accel/accel.sh@21 -- # val= 00:07:21.396 23:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.396 23:03:44 -- accel/accel.sh@20 -- # IFS=: 00:07:21.396 23:03:44 -- accel/accel.sh@20 -- # read -r var val 00:07:21.396 23:03:44 -- accel/accel.sh@21 -- # val= 00:07:21.396 23:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.396 23:03:44 -- accel/accel.sh@20 -- # IFS=: 00:07:21.396 23:03:44 -- accel/accel.sh@20 -- # read -r var val 00:07:21.396 23:03:44 -- accel/accel.sh@21 -- # val= 00:07:21.396 23:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.396 23:03:44 -- accel/accel.sh@20 -- # IFS=: 00:07:21.396 23:03:44 -- accel/accel.sh@20 -- # read -r var val 00:07:21.396 23:03:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:21.396 23:03:44 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:21.396 23:03:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.396 00:07:21.396 real 0m2.411s 00:07:21.396 user 0m2.251s 00:07:21.396 sys 0m0.166s 00:07:21.396 23:03:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.396 23:03:44 -- common/autotest_common.sh@10 -- # set +x 00:07:21.396 ************************************ 00:07:21.396 END TEST accel_comp 00:07:21.396 ************************************ 00:07:21.396 23:03:44 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:21.396 23:03:44 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:21.396 23:03:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:21.396 23:03:44 -- common/autotest_common.sh@10 -- # set +x 00:07:21.396 ************************************ 00:07:21.396 START TEST accel_decomp 00:07:21.396 ************************************ 00:07:21.396 23:03:44 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:21.396 23:03:44 -- accel/accel.sh@16 -- # local accel_opc 00:07:21.396 23:03:44 -- accel/accel.sh@17 -- # local accel_module 00:07:21.396 23:03:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:21.396 23:03:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:21.396 23:03:44 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.396 23:03:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.396 23:03:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.396 23:03:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.396 23:03:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.396 23:03:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.396 23:03:44 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.396 23:03:44 -- accel/accel.sh@42 -- # jq -r . 00:07:21.656 [2024-06-07 23:03:44.079447] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:07:21.657 [2024-06-07 23:03:44.079522] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2635178 ] 00:07:21.657 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.657 [2024-06-07 23:03:44.140916] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.657 [2024-06-07 23:03:44.170780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.043 23:03:45 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:23.043 00:07:23.043 SPDK Configuration: 00:07:23.043 Core mask: 0x1 00:07:23.043 00:07:23.043 Accel Perf Configuration: 00:07:23.043 Workload Type: decompress 00:07:23.043 Transfer size: 4096 bytes 00:07:23.043 Vector count 1 00:07:23.043 Module: software 00:07:23.043 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:23.043 Queue depth: 32 00:07:23.043 Allocate depth: 32 00:07:23.043 # threads/core: 1 00:07:23.044 Run time: 1 seconds 00:07:23.044 Verify: Yes 00:07:23.044 00:07:23.044 Running for 1 seconds... 00:07:23.044 00:07:23.044 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:23.044 ------------------------------------------------------------------------------------ 00:07:23.044 0,0 63264/s 116 MiB/s 0 0 00:07:23.044 ==================================================================================== 00:07:23.044 Total 63264/s 247 MiB/s 0 0' 00:07:23.044 23:03:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.044 23:03:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.044 23:03:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.044 23:03:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.044 23:03:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.044 23:03:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.044 23:03:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.044 23:03:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.044 23:03:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.044 23:03:45 -- accel/accel.sh@42 -- # jq -r . 00:07:23.044 [2024-06-07 23:03:45.295471] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
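decompress, configured above with "Verify: Yes", is the inverse workload: previously compressed chunks are expanded and the result is validated. A minimal round-trip illustration, again with zlib standing in for the actual software engine:

```python
import zlib

original = b"sample payload " * 256                      # stand-in input data
compressed = zlib.compress(original)
assert zlib.decompress(compressed) == original           # round trip, in the spirit of "Verify: Yes"
```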
00:07:23.044 [2024-06-07 23:03:45.295516] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2635294 ] 00:07:23.044 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.044 [2024-06-07 23:03:45.345991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.044 [2024-06-07 23:03:45.374010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.044 23:03:45 -- accel/accel.sh@21 -- # val= 00:07:23.044 23:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.044 23:03:45 -- accel/accel.sh@21 -- # val= 00:07:23.044 23:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.044 23:03:45 -- accel/accel.sh@21 -- # val= 00:07:23.044 23:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.044 23:03:45 -- accel/accel.sh@21 -- # val=0x1 00:07:23.044 23:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.044 23:03:45 -- accel/accel.sh@21 -- # val= 00:07:23.044 23:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.044 23:03:45 -- accel/accel.sh@21 -- # val= 00:07:23.044 23:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.044 23:03:45 -- accel/accel.sh@21 -- # val=decompress 00:07:23.044 23:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.044 23:03:45 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.044 23:03:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:23.044 23:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.044 23:03:45 -- accel/accel.sh@21 -- # val= 00:07:23.044 23:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.044 23:03:45 -- accel/accel.sh@21 -- # val=software 00:07:23.044 23:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.044 23:03:45 -- accel/accel.sh@23 -- # accel_module=software 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.044 23:03:45 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:23.044 23:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.044 23:03:45 -- accel/accel.sh@21 -- # val=32 00:07:23.044 23:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.044 23:03:45 
-- accel/accel.sh@20 -- # read -r var val 00:07:23.044 23:03:45 -- accel/accel.sh@21 -- # val=32 00:07:23.044 23:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.044 23:03:45 -- accel/accel.sh@21 -- # val=1 00:07:23.044 23:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.044 23:03:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:23.044 23:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.044 23:03:45 -- accel/accel.sh@21 -- # val=Yes 00:07:23.044 23:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.044 23:03:45 -- accel/accel.sh@21 -- # val= 00:07:23.044 23:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.044 23:03:45 -- accel/accel.sh@21 -- # val= 00:07:23.044 23:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.044 23:03:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.986 23:03:46 -- accel/accel.sh@21 -- # val= 00:07:23.986 23:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.986 23:03:46 -- accel/accel.sh@20 -- # IFS=: 00:07:23.986 23:03:46 -- accel/accel.sh@20 -- # read -r var val 00:07:23.986 23:03:46 -- accel/accel.sh@21 -- # val= 00:07:23.986 23:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.986 23:03:46 -- accel/accel.sh@20 -- # IFS=: 00:07:23.986 23:03:46 -- accel/accel.sh@20 -- # read -r var val 00:07:23.986 23:03:46 -- accel/accel.sh@21 -- # val= 00:07:23.986 23:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.986 23:03:46 -- accel/accel.sh@20 -- # IFS=: 00:07:23.986 23:03:46 -- accel/accel.sh@20 -- # read -r var val 00:07:23.986 23:03:46 -- accel/accel.sh@21 -- # val= 00:07:23.986 23:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.986 23:03:46 -- accel/accel.sh@20 -- # IFS=: 00:07:23.986 23:03:46 -- accel/accel.sh@20 -- # read -r var val 00:07:23.986 23:03:46 -- accel/accel.sh@21 -- # val= 00:07:23.986 23:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.986 23:03:46 -- accel/accel.sh@20 -- # IFS=: 00:07:23.986 23:03:46 -- accel/accel.sh@20 -- # read -r var val 00:07:23.986 23:03:46 -- accel/accel.sh@21 -- # val= 00:07:23.986 23:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.986 23:03:46 -- accel/accel.sh@20 -- # IFS=: 00:07:23.986 23:03:46 -- accel/accel.sh@20 -- # read -r var val 00:07:23.986 23:03:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:23.986 23:03:46 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:23.986 23:03:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.986 00:07:23.986 real 0m2.441s 00:07:23.986 user 0m2.245s 00:07:23.986 sys 0m0.202s 00:07:23.986 23:03:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.986 23:03:46 -- common/autotest_common.sh@10 -- # set +x 00:07:23.986 ************************************ 00:07:23.986 END TEST accel_decomp 00:07:23.986 ************************************ 00:07:23.986 23:03:46 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:23.986 23:03:46 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:23.986 23:03:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:23.986 23:03:46 -- common/autotest_common.sh@10 -- # set +x 00:07:23.986 ************************************ 00:07:23.986 START TEST accel_decmop_full 00:07:23.986 ************************************ 00:07:23.986 23:03:46 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:23.986 23:03:46 -- accel/accel.sh@16 -- # local accel_opc 00:07:23.986 23:03:46 -- accel/accel.sh@17 -- # local accel_module 00:07:23.986 23:03:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:23.986 23:03:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:23.986 23:03:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.986 23:03:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.986 23:03:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.986 23:03:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.986 23:03:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.986 23:03:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.986 23:03:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.986 23:03:46 -- accel/accel.sh@42 -- # jq -r . 00:07:23.986 [2024-06-07 23:03:46.561395] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:23.986 [2024-06-07 23:03:46.561482] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2635555 ] 00:07:23.986 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.986 [2024-06-07 23:03:46.623114] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.986 [2024-06-07 23:03:46.651136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.372 23:03:47 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:25.372 00:07:25.372 SPDK Configuration: 00:07:25.372 Core mask: 0x1 00:07:25.372 00:07:25.372 Accel Perf Configuration: 00:07:25.372 Workload Type: decompress 00:07:25.372 Transfer size: 111250 bytes 00:07:25.372 Vector count 1 00:07:25.372 Module: software 00:07:25.372 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:25.372 Queue depth: 32 00:07:25.372 Allocate depth: 32 00:07:25.372 # threads/core: 1 00:07:25.372 Run time: 1 seconds 00:07:25.372 Verify: Yes 00:07:25.372 00:07:25.372 Running for 1 seconds... 
00:07:25.372 00:07:25.372 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:25.372 ------------------------------------------------------------------------------------ 00:07:25.372 0,0 4096/s 169 MiB/s 0 0 00:07:25.372 ==================================================================================== 00:07:25.372 Total 4096/s 434 MiB/s 0 0' 00:07:25.372 23:03:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:25.372 23:03:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.372 23:03:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:25.372 23:03:47 -- accel/accel.sh@20 -- # read -r var val 00:07:25.372 23:03:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.372 23:03:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.372 23:03:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.372 23:03:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.372 23:03:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.372 23:03:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.372 23:03:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.372 23:03:47 -- accel/accel.sh@42 -- # jq -r . 00:07:25.372 [2024-06-07 23:03:47.790650] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:25.372 [2024-06-07 23:03:47.790694] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2635889 ] 00:07:25.373 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.373 [2024-06-07 23:03:47.841045] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.373 [2024-06-07 23:03:47.868813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.373 23:03:47 -- accel/accel.sh@21 -- # val= 00:07:25.373 23:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.373 23:03:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.373 23:03:47 -- accel/accel.sh@20 -- # read -r var val 00:07:25.373 23:03:47 -- accel/accel.sh@21 -- # val= 00:07:25.373 23:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.373 23:03:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.373 23:03:47 -- accel/accel.sh@20 -- # read -r var val 00:07:25.373 23:03:47 -- accel/accel.sh@21 -- # val= 00:07:25.373 23:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.373 23:03:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.373 23:03:47 -- accel/accel.sh@20 -- # read -r var val 00:07:25.373 23:03:47 -- accel/accel.sh@21 -- # val=0x1 00:07:25.373 23:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.373 23:03:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.373 23:03:47 -- accel/accel.sh@20 -- # read -r var val 00:07:25.373 23:03:47 -- accel/accel.sh@21 -- # val= 00:07:25.373 23:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.373 23:03:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.373 23:03:47 -- accel/accel.sh@20 -- # read -r var val 00:07:25.373 23:03:47 -- accel/accel.sh@21 -- # val= 00:07:25.373 23:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.373 23:03:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.373 23:03:47 -- accel/accel.sh@20 -- # read -r var val 00:07:25.373 23:03:47 -- accel/accel.sh@21 -- # val=decompress 00:07:25.373 23:03:47 -- accel/accel.sh@22 -- # case "$var" 
in 00:07:25.373 23:03:47 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:25.373 23:03:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.373 23:03:47 -- accel/accel.sh@20 -- # read -r var val 00:07:25.373 23:03:47 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:25.373 23:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.373 23:03:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.373 23:03:47 -- accel/accel.sh@20 -- # read -r var val 00:07:25.373 23:03:47 -- accel/accel.sh@21 -- # val= 00:07:25.373 23:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.373 23:03:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.373 23:03:47 -- accel/accel.sh@20 -- # read -r var val 00:07:25.373 23:03:47 -- accel/accel.sh@21 -- # val=software 00:07:25.373 23:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.373 23:03:47 -- accel/accel.sh@23 -- # accel_module=software 00:07:25.373 23:03:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.373 23:03:47 -- accel/accel.sh@20 -- # read -r var val 00:07:25.373 23:03:47 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:25.373 23:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.373 23:03:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.373 23:03:47 -- accel/accel.sh@20 -- # read -r var val 00:07:25.373 23:03:47 -- accel/accel.sh@21 -- # val=32 00:07:25.373 23:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.373 23:03:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.373 23:03:47 -- accel/accel.sh@20 -- # read -r var val 00:07:25.373 23:03:47 -- accel/accel.sh@21 -- # val=32 00:07:25.373 23:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.373 23:03:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.373 23:03:47 -- accel/accel.sh@20 -- # read -r var val 00:07:25.373 23:03:47 -- accel/accel.sh@21 -- # val=1 00:07:25.373 23:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.373 23:03:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.373 23:03:47 -- accel/accel.sh@20 -- # read -r var val 00:07:25.373 23:03:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:25.373 23:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.373 23:03:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.373 23:03:47 -- accel/accel.sh@20 -- # read -r var val 00:07:25.373 23:03:47 -- accel/accel.sh@21 -- # val=Yes 00:07:25.373 23:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.373 23:03:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.373 23:03:47 -- accel/accel.sh@20 -- # read -r var val 00:07:25.373 23:03:47 -- accel/accel.sh@21 -- # val= 00:07:25.373 23:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.373 23:03:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.373 23:03:47 -- accel/accel.sh@20 -- # read -r var val 00:07:25.373 23:03:47 -- accel/accel.sh@21 -- # val= 00:07:25.373 23:03:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.373 23:03:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.373 23:03:47 -- accel/accel.sh@20 -- # read -r var val 00:07:26.316 23:03:48 -- accel/accel.sh@21 -- # val= 00:07:26.316 23:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.316 23:03:48 -- accel/accel.sh@20 -- # IFS=: 00:07:26.316 23:03:48 -- accel/accel.sh@20 -- # read -r var val 00:07:26.316 23:03:48 -- accel/accel.sh@21 -- # val= 00:07:26.316 23:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.316 23:03:48 -- accel/accel.sh@20 -- # IFS=: 00:07:26.577 23:03:48 -- accel/accel.sh@20 -- # read -r var val 00:07:26.577 23:03:48 -- accel/accel.sh@21 -- # val= 00:07:26.577 23:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.577 23:03:48 -- 
accel/accel.sh@20 -- # IFS=: 00:07:26.577 23:03:48 -- accel/accel.sh@20 -- # read -r var val 00:07:26.577 23:03:48 -- accel/accel.sh@21 -- # val= 00:07:26.577 23:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.577 23:03:48 -- accel/accel.sh@20 -- # IFS=: 00:07:26.577 23:03:48 -- accel/accel.sh@20 -- # read -r var val 00:07:26.577 23:03:48 -- accel/accel.sh@21 -- # val= 00:07:26.577 23:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.577 23:03:48 -- accel/accel.sh@20 -- # IFS=: 00:07:26.577 23:03:48 -- accel/accel.sh@20 -- # read -r var val 00:07:26.577 23:03:48 -- accel/accel.sh@21 -- # val= 00:07:26.577 23:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.577 23:03:48 -- accel/accel.sh@20 -- # IFS=: 00:07:26.577 23:03:48 -- accel/accel.sh@20 -- # read -r var val 00:07:26.577 23:03:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:26.577 23:03:49 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:26.577 23:03:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.577 00:07:26.577 real 0m2.468s 00:07:26.577 user 0m2.281s 00:07:26.577 sys 0m0.193s 00:07:26.577 23:03:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.577 23:03:49 -- common/autotest_common.sh@10 -- # set +x 00:07:26.577 ************************************ 00:07:26.577 END TEST accel_decmop_full 00:07:26.577 ************************************ 00:07:26.577 23:03:49 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:26.577 23:03:49 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:26.577 23:03:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:26.577 23:03:49 -- common/autotest_common.sh@10 -- # set +x 00:07:26.577 ************************************ 00:07:26.577 START TEST accel_decomp_mcore 00:07:26.577 ************************************ 00:07:26.577 23:03:49 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:26.577 23:03:49 -- accel/accel.sh@16 -- # local accel_opc 00:07:26.577 23:03:49 -- accel/accel.sh@17 -- # local accel_module 00:07:26.577 23:03:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:26.577 23:03:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:26.577 23:03:49 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.577 23:03:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.577 23:03:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.577 23:03:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.577 23:03:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.577 23:03:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.577 23:03:49 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.577 23:03:49 -- accel/accel.sh@42 -- # jq -r . 00:07:26.577 [2024-06-07 23:03:49.067323] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:07:26.577 [2024-06-07 23:03:49.067420] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2636201 ] 00:07:26.577 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.577 [2024-06-07 23:03:49.129806] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:26.577 [2024-06-07 23:03:49.161930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.577 [2024-06-07 23:03:49.162052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.577 [2024-06-07 23:03:49.162208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.577 [2024-06-07 23:03:49.162208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.965 23:03:50 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:27.965 00:07:27.965 SPDK Configuration: 00:07:27.965 Core mask: 0xf 00:07:27.965 00:07:27.965 Accel Perf Configuration: 00:07:27.965 Workload Type: decompress 00:07:27.965 Transfer size: 4096 bytes 00:07:27.965 Vector count 1 00:07:27.965 Module: software 00:07:27.965 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:27.965 Queue depth: 32 00:07:27.965 Allocate depth: 32 00:07:27.965 # threads/core: 1 00:07:27.965 Run time: 1 seconds 00:07:27.965 Verify: Yes 00:07:27.965 00:07:27.965 Running for 1 seconds... 00:07:27.965 00:07:27.965 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:27.965 ------------------------------------------------------------------------------------ 00:07:27.965 0,0 58208/s 107 MiB/s 0 0 00:07:27.965 3,0 58240/s 107 MiB/s 0 0 00:07:27.965 2,0 86080/s 158 MiB/s 0 0 00:07:27.965 1,0 58176/s 107 MiB/s 0 0 00:07:27.965 ==================================================================================== 00:07:27.965 Total 260704/s 1018 MiB/s 0 0' 00:07:27.965 23:03:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.965 23:03:50 -- accel/accel.sh@20 -- # read -r var val 00:07:27.965 23:03:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:27.965 23:03:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:27.965 23:03:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.965 23:03:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.965 23:03:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.965 23:03:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.965 23:03:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.965 23:03:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.965 23:03:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.965 23:03:50 -- accel/accel.sh@42 -- # jq -r . 00:07:27.965 [2024-06-07 23:03:50.309892] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
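For reference, the multi-core software decompress case traced above can be re-run outside the autotest wrapper with the same flags the log records. The binary and bib paths below are the workspace-specific ones from this run, and dropping '-c /dev/fd/62' (the JSON accel config the harness feeds in over fd 62) is an assumption that accel_perf then falls back to its built-in software module; treat this as a sketch, not the harness's exact invocation.
# sketch only -- mirrors the -m 0xf (4 reactor cores) decompress run recorded above
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -m 0xf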
00:07:27.965 [2024-06-07 23:03:50.309965] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2636322 ] 00:07:27.965 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.965 [2024-06-07 23:03:50.371005] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:27.965 [2024-06-07 23:03:50.402131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.965 [2024-06-07 23:03:50.402249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.965 [2024-06-07 23:03:50.402405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.965 [2024-06-07 23:03:50.402405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.965 23:03:50 -- accel/accel.sh@21 -- # val= 00:07:27.965 23:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.965 23:03:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.965 23:03:50 -- accel/accel.sh@20 -- # read -r var val 00:07:27.965 23:03:50 -- accel/accel.sh@21 -- # val= 00:07:27.965 23:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.965 23:03:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.965 23:03:50 -- accel/accel.sh@20 -- # read -r var val 00:07:27.965 23:03:50 -- accel/accel.sh@21 -- # val= 00:07:27.965 23:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.965 23:03:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.965 23:03:50 -- accel/accel.sh@20 -- # read -r var val 00:07:27.965 23:03:50 -- accel/accel.sh@21 -- # val=0xf 00:07:27.965 23:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.965 23:03:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.965 23:03:50 -- accel/accel.sh@20 -- # read -r var val 00:07:27.965 23:03:50 -- accel/accel.sh@21 -- # val= 00:07:27.965 23:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.965 23:03:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.965 23:03:50 -- accel/accel.sh@20 -- # read -r var val 00:07:27.965 23:03:50 -- accel/accel.sh@21 -- # val= 00:07:27.965 23:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.965 23:03:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.965 23:03:50 -- accel/accel.sh@20 -- # read -r var val 00:07:27.965 23:03:50 -- accel/accel.sh@21 -- # val=decompress 00:07:27.965 23:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.965 23:03:50 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:27.965 23:03:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.965 23:03:50 -- accel/accel.sh@20 -- # read -r var val 00:07:27.965 23:03:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:27.965 23:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.965 23:03:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.965 23:03:50 -- accel/accel.sh@20 -- # read -r var val 00:07:27.965 23:03:50 -- accel/accel.sh@21 -- # val= 00:07:27.965 23:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.965 23:03:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.965 23:03:50 -- accel/accel.sh@20 -- # read -r var val 00:07:27.965 23:03:50 -- accel/accel.sh@21 -- # val=software 00:07:27.965 23:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.965 23:03:50 -- accel/accel.sh@23 -- # accel_module=software 00:07:27.965 23:03:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.965 23:03:50 -- accel/accel.sh@20 -- # read -r var val 00:07:27.965 23:03:50 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:27.965 23:03:50 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:27.965 23:03:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.965 23:03:50 -- accel/accel.sh@20 -- # read -r var val 00:07:27.965 23:03:50 -- accel/accel.sh@21 -- # val=32 00:07:27.965 23:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.965 23:03:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.965 23:03:50 -- accel/accel.sh@20 -- # read -r var val 00:07:27.965 23:03:50 -- accel/accel.sh@21 -- # val=32 00:07:27.965 23:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.965 23:03:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.966 23:03:50 -- accel/accel.sh@20 -- # read -r var val 00:07:27.966 23:03:50 -- accel/accel.sh@21 -- # val=1 00:07:27.966 23:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.966 23:03:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.966 23:03:50 -- accel/accel.sh@20 -- # read -r var val 00:07:27.966 23:03:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:27.966 23:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.966 23:03:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.966 23:03:50 -- accel/accel.sh@20 -- # read -r var val 00:07:27.966 23:03:50 -- accel/accel.sh@21 -- # val=Yes 00:07:27.966 23:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.966 23:03:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.966 23:03:50 -- accel/accel.sh@20 -- # read -r var val 00:07:27.966 23:03:50 -- accel/accel.sh@21 -- # val= 00:07:27.966 23:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.966 23:03:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.966 23:03:50 -- accel/accel.sh@20 -- # read -r var val 00:07:27.966 23:03:50 -- accel/accel.sh@21 -- # val= 00:07:27.966 23:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.966 23:03:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.966 23:03:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.966 23:03:51 -- accel/accel.sh@21 -- # val= 00:07:28.966 23:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.966 23:03:51 -- accel/accel.sh@20 -- # IFS=: 00:07:28.966 23:03:51 -- accel/accel.sh@20 -- # read -r var val 00:07:28.966 23:03:51 -- accel/accel.sh@21 -- # val= 00:07:28.966 23:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.966 23:03:51 -- accel/accel.sh@20 -- # IFS=: 00:07:28.966 23:03:51 -- accel/accel.sh@20 -- # read -r var val 00:07:28.966 23:03:51 -- accel/accel.sh@21 -- # val= 00:07:28.966 23:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.966 23:03:51 -- accel/accel.sh@20 -- # IFS=: 00:07:28.966 23:03:51 -- accel/accel.sh@20 -- # read -r var val 00:07:28.966 23:03:51 -- accel/accel.sh@21 -- # val= 00:07:28.966 23:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.966 23:03:51 -- accel/accel.sh@20 -- # IFS=: 00:07:28.966 23:03:51 -- accel/accel.sh@20 -- # read -r var val 00:07:28.966 23:03:51 -- accel/accel.sh@21 -- # val= 00:07:28.966 23:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.966 23:03:51 -- accel/accel.sh@20 -- # IFS=: 00:07:28.966 23:03:51 -- accel/accel.sh@20 -- # read -r var val 00:07:28.966 23:03:51 -- accel/accel.sh@21 -- # val= 00:07:28.966 23:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.966 23:03:51 -- accel/accel.sh@20 -- # IFS=: 00:07:28.966 23:03:51 -- accel/accel.sh@20 -- # read -r var val 00:07:28.966 23:03:51 -- accel/accel.sh@21 -- # val= 00:07:28.966 23:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.966 23:03:51 -- accel/accel.sh@20 -- # IFS=: 00:07:28.966 23:03:51 -- accel/accel.sh@20 -- # read -r var val 00:07:28.966 23:03:51 -- accel/accel.sh@21 -- # val= 00:07:28.966 23:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.966 
23:03:51 -- accel/accel.sh@20 -- # IFS=: 00:07:28.966 23:03:51 -- accel/accel.sh@20 -- # read -r var val 00:07:28.966 23:03:51 -- accel/accel.sh@21 -- # val= 00:07:28.966 23:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.966 23:03:51 -- accel/accel.sh@20 -- # IFS=: 00:07:28.966 23:03:51 -- accel/accel.sh@20 -- # read -r var val 00:07:28.966 23:03:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:28.966 23:03:51 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:28.966 23:03:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.966 00:07:28.966 real 0m2.489s 00:07:28.966 user 0m8.744s 00:07:28.966 sys 0m0.220s 00:07:28.966 23:03:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.966 23:03:51 -- common/autotest_common.sh@10 -- # set +x 00:07:28.966 ************************************ 00:07:28.966 END TEST accel_decomp_mcore 00:07:28.966 ************************************ 00:07:28.966 23:03:51 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:28.966 23:03:51 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:28.966 23:03:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:28.966 23:03:51 -- common/autotest_common.sh@10 -- # set +x 00:07:28.966 ************************************ 00:07:28.966 START TEST accel_decomp_full_mcore 00:07:28.966 ************************************ 00:07:28.966 23:03:51 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:28.966 23:03:51 -- accel/accel.sh@16 -- # local accel_opc 00:07:28.966 23:03:51 -- accel/accel.sh@17 -- # local accel_module 00:07:28.966 23:03:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:28.966 23:03:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:28.966 23:03:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.966 23:03:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.966 23:03:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.966 23:03:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.966 23:03:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.966 23:03:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.966 23:03:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.966 23:03:51 -- accel/accel.sh@42 -- # jq -r . 00:07:28.966 [2024-06-07 23:03:51.599611] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
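The '_full_' variants add '-o 0' to the same command line; the configuration blocks for these runs report a transfer size of 111250 bytes instead of 4096. A hedged sketch of that invocation, again using the workspace paths from this log and the same assumption that the harness-supplied '-c /dev/fd/62' config can be omitted:
# sketch only -- full-buffer (-o 0) multi-core decompress, as in accel_decomp_full_mcore
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0 -m 0xf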
00:07:28.966 [2024-06-07 23:03:51.599704] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2636618 ] 00:07:28.966 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.227 [2024-06-07 23:03:51.662455] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:29.227 [2024-06-07 23:03:51.695479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.227 [2024-06-07 23:03:51.695596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.227 [2024-06-07 23:03:51.695754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.227 [2024-06-07 23:03:51.695754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:30.168 23:03:52 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:30.168 00:07:30.168 SPDK Configuration: 00:07:30.168 Core mask: 0xf 00:07:30.168 00:07:30.168 Accel Perf Configuration: 00:07:30.168 Workload Type: decompress 00:07:30.168 Transfer size: 111250 bytes 00:07:30.168 Vector count 1 00:07:30.168 Module: software 00:07:30.168 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:30.168 Queue depth: 32 00:07:30.168 Allocate depth: 32 00:07:30.168 # threads/core: 1 00:07:30.168 Run time: 1 seconds 00:07:30.168 Verify: Yes 00:07:30.168 00:07:30.168 Running for 1 seconds... 00:07:30.168 00:07:30.168 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:30.168 ------------------------------------------------------------------------------------ 00:07:30.168 0,0 4064/s 167 MiB/s 0 0 00:07:30.168 3,0 4096/s 169 MiB/s 0 0 00:07:30.168 2,0 5920/s 244 MiB/s 0 0 00:07:30.168 1,0 4096/s 169 MiB/s 0 0 00:07:30.168 ==================================================================================== 00:07:30.168 Total 18176/s 1928 MiB/s 0 0' 00:07:30.168 23:03:52 -- accel/accel.sh@20 -- # IFS=: 00:07:30.168 23:03:52 -- accel/accel.sh@20 -- # read -r var val 00:07:30.168 23:03:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:30.168 23:03:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:30.168 23:03:52 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.168 23:03:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.169 23:03:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.169 23:03:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.169 23:03:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.169 23:03:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.169 23:03:52 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.169 23:03:52 -- accel/accel.sh@42 -- # jq -r . 00:07:30.429 [2024-06-07 23:03:52.852326] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
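The aggregate row in the table above is internally consistent: 18176 transfers per second at 111250 bytes per transfer is roughly 1928 MiB/s (taking MiB = 1048576 bytes), which gives a quick way to sanity-check accel_perf totals:
# 18176 transfers/s * 111250 bytes, expressed in MiB/s
echo $(( 18176 * 111250 / 1048576 ))   # prints 1928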
00:07:30.429 [2024-06-07 23:03:52.852411] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2636959 ] 00:07:30.429 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.429 [2024-06-07 23:03:52.915400] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:30.429 [2024-06-07 23:03:52.946099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.429 [2024-06-07 23:03:52.946216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.429 [2024-06-07 23:03:52.946373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.429 [2024-06-07 23:03:52.946374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:30.429 23:03:52 -- accel/accel.sh@21 -- # val= 00:07:30.429 23:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.429 23:03:52 -- accel/accel.sh@20 -- # IFS=: 00:07:30.429 23:03:52 -- accel/accel.sh@20 -- # read -r var val 00:07:30.429 23:03:52 -- accel/accel.sh@21 -- # val= 00:07:30.429 23:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.429 23:03:52 -- accel/accel.sh@20 -- # IFS=: 00:07:30.429 23:03:52 -- accel/accel.sh@20 -- # read -r var val 00:07:30.429 23:03:52 -- accel/accel.sh@21 -- # val= 00:07:30.429 23:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.429 23:03:52 -- accel/accel.sh@20 -- # IFS=: 00:07:30.429 23:03:52 -- accel/accel.sh@20 -- # read -r var val 00:07:30.429 23:03:52 -- accel/accel.sh@21 -- # val=0xf 00:07:30.429 23:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.429 23:03:52 -- accel/accel.sh@20 -- # IFS=: 00:07:30.429 23:03:52 -- accel/accel.sh@20 -- # read -r var val 00:07:30.429 23:03:52 -- accel/accel.sh@21 -- # val= 00:07:30.429 23:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.429 23:03:52 -- accel/accel.sh@20 -- # IFS=: 00:07:30.429 23:03:52 -- accel/accel.sh@20 -- # read -r var val 00:07:30.429 23:03:52 -- accel/accel.sh@21 -- # val= 00:07:30.429 23:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.429 23:03:52 -- accel/accel.sh@20 -- # IFS=: 00:07:30.429 23:03:52 -- accel/accel.sh@20 -- # read -r var val 00:07:30.429 23:03:52 -- accel/accel.sh@21 -- # val=decompress 00:07:30.429 23:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.429 23:03:52 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:30.429 23:03:52 -- accel/accel.sh@20 -- # IFS=: 00:07:30.429 23:03:52 -- accel/accel.sh@20 -- # read -r var val 00:07:30.429 23:03:52 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:30.430 23:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.430 23:03:52 -- accel/accel.sh@20 -- # IFS=: 00:07:30.430 23:03:52 -- accel/accel.sh@20 -- # read -r var val 00:07:30.430 23:03:52 -- accel/accel.sh@21 -- # val= 00:07:30.430 23:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.430 23:03:52 -- accel/accel.sh@20 -- # IFS=: 00:07:30.430 23:03:52 -- accel/accel.sh@20 -- # read -r var val 00:07:30.430 23:03:52 -- accel/accel.sh@21 -- # val=software 00:07:30.430 23:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.430 23:03:52 -- accel/accel.sh@23 -- # accel_module=software 00:07:30.430 23:03:52 -- accel/accel.sh@20 -- # IFS=: 00:07:30.430 23:03:52 -- accel/accel.sh@20 -- # read -r var val 00:07:30.430 23:03:52 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:30.430 23:03:52 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:30.430 23:03:52 -- accel/accel.sh@20 -- # IFS=: 00:07:30.430 23:03:52 -- accel/accel.sh@20 -- # read -r var val 00:07:30.430 23:03:52 -- accel/accel.sh@21 -- # val=32 00:07:30.430 23:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.430 23:03:52 -- accel/accel.sh@20 -- # IFS=: 00:07:30.430 23:03:52 -- accel/accel.sh@20 -- # read -r var val 00:07:30.430 23:03:52 -- accel/accel.sh@21 -- # val=32 00:07:30.430 23:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.430 23:03:52 -- accel/accel.sh@20 -- # IFS=: 00:07:30.430 23:03:52 -- accel/accel.sh@20 -- # read -r var val 00:07:30.430 23:03:52 -- accel/accel.sh@21 -- # val=1 00:07:30.430 23:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.430 23:03:52 -- accel/accel.sh@20 -- # IFS=: 00:07:30.430 23:03:52 -- accel/accel.sh@20 -- # read -r var val 00:07:30.430 23:03:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:30.430 23:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.430 23:03:52 -- accel/accel.sh@20 -- # IFS=: 00:07:30.430 23:03:52 -- accel/accel.sh@20 -- # read -r var val 00:07:30.430 23:03:52 -- accel/accel.sh@21 -- # val=Yes 00:07:30.430 23:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.430 23:03:52 -- accel/accel.sh@20 -- # IFS=: 00:07:30.430 23:03:52 -- accel/accel.sh@20 -- # read -r var val 00:07:30.430 23:03:52 -- accel/accel.sh@21 -- # val= 00:07:30.430 23:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.430 23:03:52 -- accel/accel.sh@20 -- # IFS=: 00:07:30.430 23:03:52 -- accel/accel.sh@20 -- # read -r var val 00:07:30.430 23:03:52 -- accel/accel.sh@21 -- # val= 00:07:30.430 23:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.430 23:03:52 -- accel/accel.sh@20 -- # IFS=: 00:07:30.430 23:03:52 -- accel/accel.sh@20 -- # read -r var val 00:07:31.813 23:03:54 -- accel/accel.sh@21 -- # val= 00:07:31.813 23:03:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.813 23:03:54 -- accel/accel.sh@20 -- # IFS=: 00:07:31.813 23:03:54 -- accel/accel.sh@20 -- # read -r var val 00:07:31.813 23:03:54 -- accel/accel.sh@21 -- # val= 00:07:31.813 23:03:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.813 23:03:54 -- accel/accel.sh@20 -- # IFS=: 00:07:31.813 23:03:54 -- accel/accel.sh@20 -- # read -r var val 00:07:31.813 23:03:54 -- accel/accel.sh@21 -- # val= 00:07:31.813 23:03:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.813 23:03:54 -- accel/accel.sh@20 -- # IFS=: 00:07:31.813 23:03:54 -- accel/accel.sh@20 -- # read -r var val 00:07:31.813 23:03:54 -- accel/accel.sh@21 -- # val= 00:07:31.813 23:03:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.813 23:03:54 -- accel/accel.sh@20 -- # IFS=: 00:07:31.813 23:03:54 -- accel/accel.sh@20 -- # read -r var val 00:07:31.813 23:03:54 -- accel/accel.sh@21 -- # val= 00:07:31.813 23:03:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.813 23:03:54 -- accel/accel.sh@20 -- # IFS=: 00:07:31.813 23:03:54 -- accel/accel.sh@20 -- # read -r var val 00:07:31.813 23:03:54 -- accel/accel.sh@21 -- # val= 00:07:31.813 23:03:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.813 23:03:54 -- accel/accel.sh@20 -- # IFS=: 00:07:31.813 23:03:54 -- accel/accel.sh@20 -- # read -r var val 00:07:31.813 23:03:54 -- accel/accel.sh@21 -- # val= 00:07:31.813 23:03:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.813 23:03:54 -- accel/accel.sh@20 -- # IFS=: 00:07:31.813 23:03:54 -- accel/accel.sh@20 -- # read -r var val 00:07:31.813 23:03:54 -- accel/accel.sh@21 -- # val= 00:07:31.813 23:03:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.813 
23:03:54 -- accel/accel.sh@20 -- # IFS=: 00:07:31.813 23:03:54 -- accel/accel.sh@20 -- # read -r var val 00:07:31.813 23:03:54 -- accel/accel.sh@21 -- # val= 00:07:31.813 23:03:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.813 23:03:54 -- accel/accel.sh@20 -- # IFS=: 00:07:31.813 23:03:54 -- accel/accel.sh@20 -- # read -r var val 00:07:31.813 23:03:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:31.813 23:03:54 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:31.813 23:03:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.813 00:07:31.813 real 0m2.512s 00:07:31.813 user 0m8.826s 00:07:31.813 sys 0m0.221s 00:07:31.813 23:03:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.813 23:03:54 -- common/autotest_common.sh@10 -- # set +x 00:07:31.813 ************************************ 00:07:31.813 END TEST accel_decomp_full_mcore 00:07:31.813 ************************************ 00:07:31.813 23:03:54 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:31.813 23:03:54 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:31.813 23:03:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:31.813 23:03:54 -- common/autotest_common.sh@10 -- # set +x 00:07:31.813 ************************************ 00:07:31.813 START TEST accel_decomp_mthread 00:07:31.813 ************************************ 00:07:31.813 23:03:54 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:31.813 23:03:54 -- accel/accel.sh@16 -- # local accel_opc 00:07:31.813 23:03:54 -- accel/accel.sh@17 -- # local accel_module 00:07:31.813 23:03:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:31.813 23:03:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:31.813 23:03:54 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.813 23:03:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.813 23:03:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.813 23:03:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.813 23:03:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.813 23:03:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.813 23:03:54 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.813 23:03:54 -- accel/accel.sh@42 -- # jq -r . 00:07:31.813 [2024-06-07 23:03:54.153847] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:31.813 [2024-06-07 23:03:54.153934] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2637285 ] 00:07:31.813 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.813 [2024-06-07 23:03:54.216408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.813 [2024-06-07 23:03:54.246150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.754 23:03:55 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:32.754 00:07:32.754 SPDK Configuration: 00:07:32.754 Core mask: 0x1 00:07:32.754 00:07:32.754 Accel Perf Configuration: 00:07:32.754 Workload Type: decompress 00:07:32.754 Transfer size: 4096 bytes 00:07:32.754 Vector count 1 00:07:32.754 Module: software 00:07:32.754 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:32.754 Queue depth: 32 00:07:32.754 Allocate depth: 32 00:07:32.754 # threads/core: 2 00:07:32.754 Run time: 1 seconds 00:07:32.754 Verify: Yes 00:07:32.754 00:07:32.754 Running for 1 seconds... 00:07:32.754 00:07:32.754 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:32.754 ------------------------------------------------------------------------------------ 00:07:32.754 0,1 31840/s 58 MiB/s 0 0 00:07:32.754 0,0 31744/s 58 MiB/s 0 0 00:07:32.754 ==================================================================================== 00:07:32.754 Total 63584/s 248 MiB/s 0 0' 00:07:32.754 23:03:55 -- accel/accel.sh@20 -- # IFS=: 00:07:32.754 23:03:55 -- accel/accel.sh@20 -- # read -r var val 00:07:32.754 23:03:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:32.754 23:03:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:32.754 23:03:55 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.754 23:03:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.754 23:03:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.754 23:03:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.755 23:03:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.755 23:03:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.755 23:03:55 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.755 23:03:55 -- accel/accel.sh@42 -- # jq -r . 00:07:32.755 [2024-06-07 23:03:55.391518] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
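The mthread case adds '-T 2' (two worker threads per core), which is why the table above shows a 0,0 and a 0,1 Core,Thread row. A minimal sketch of that run, with the same workspace paths and the same assumption about omitting the harness config:
# sketch only -- two threads on core 0, as in accel_decomp_mthread
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -T 2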
00:07:32.755 [2024-06-07 23:03:55.391602] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2637411 ] 00:07:32.755 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.016 [2024-06-07 23:03:55.453487] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.016 [2024-06-07 23:03:55.482598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.016 23:03:55 -- accel/accel.sh@21 -- # val= 00:07:33.016 23:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.016 23:03:55 -- accel/accel.sh@20 -- # IFS=: 00:07:33.016 23:03:55 -- accel/accel.sh@20 -- # read -r var val 00:07:33.016 23:03:55 -- accel/accel.sh@21 -- # val= 00:07:33.016 23:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.016 23:03:55 -- accel/accel.sh@20 -- # IFS=: 00:07:33.016 23:03:55 -- accel/accel.sh@20 -- # read -r var val 00:07:33.016 23:03:55 -- accel/accel.sh@21 -- # val= 00:07:33.016 23:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.016 23:03:55 -- accel/accel.sh@20 -- # IFS=: 00:07:33.016 23:03:55 -- accel/accel.sh@20 -- # read -r var val 00:07:33.016 23:03:55 -- accel/accel.sh@21 -- # val=0x1 00:07:33.016 23:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.016 23:03:55 -- accel/accel.sh@20 -- # IFS=: 00:07:33.016 23:03:55 -- accel/accel.sh@20 -- # read -r var val 00:07:33.016 23:03:55 -- accel/accel.sh@21 -- # val= 00:07:33.016 23:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.016 23:03:55 -- accel/accel.sh@20 -- # IFS=: 00:07:33.016 23:03:55 -- accel/accel.sh@20 -- # read -r var val 00:07:33.016 23:03:55 -- accel/accel.sh@21 -- # val= 00:07:33.016 23:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.016 23:03:55 -- accel/accel.sh@20 -- # IFS=: 00:07:33.016 23:03:55 -- accel/accel.sh@20 -- # read -r var val 00:07:33.016 23:03:55 -- accel/accel.sh@21 -- # val=decompress 00:07:33.016 23:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.016 23:03:55 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:33.016 23:03:55 -- accel/accel.sh@20 -- # IFS=: 00:07:33.016 23:03:55 -- accel/accel.sh@20 -- # read -r var val 00:07:33.016 23:03:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:33.016 23:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.016 23:03:55 -- accel/accel.sh@20 -- # IFS=: 00:07:33.016 23:03:55 -- accel/accel.sh@20 -- # read -r var val 00:07:33.016 23:03:55 -- accel/accel.sh@21 -- # val= 00:07:33.016 23:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.016 23:03:55 -- accel/accel.sh@20 -- # IFS=: 00:07:33.016 23:03:55 -- accel/accel.sh@20 -- # read -r var val 00:07:33.016 23:03:55 -- accel/accel.sh@21 -- # val=software 00:07:33.016 23:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.016 23:03:55 -- accel/accel.sh@23 -- # accel_module=software 00:07:33.016 23:03:55 -- accel/accel.sh@20 -- # IFS=: 00:07:33.016 23:03:55 -- accel/accel.sh@20 -- # read -r var val 00:07:33.016 23:03:55 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:33.016 23:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.016 23:03:55 -- accel/accel.sh@20 -- # IFS=: 00:07:33.016 23:03:55 -- accel/accel.sh@20 -- # read -r var val 00:07:33.016 23:03:55 -- accel/accel.sh@21 -- # val=32 00:07:33.016 23:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.016 23:03:55 -- accel/accel.sh@20 -- # IFS=: 00:07:33.016 23:03:55 
-- accel/accel.sh@20 -- # read -r var val 00:07:33.016 23:03:55 -- accel/accel.sh@21 -- # val=32 00:07:33.016 23:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.016 23:03:55 -- accel/accel.sh@20 -- # IFS=: 00:07:33.016 23:03:55 -- accel/accel.sh@20 -- # read -r var val 00:07:33.016 23:03:55 -- accel/accel.sh@21 -- # val=2 00:07:33.016 23:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.016 23:03:55 -- accel/accel.sh@20 -- # IFS=: 00:07:33.016 23:03:55 -- accel/accel.sh@20 -- # read -r var val 00:07:33.016 23:03:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:33.016 23:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.016 23:03:55 -- accel/accel.sh@20 -- # IFS=: 00:07:33.016 23:03:55 -- accel/accel.sh@20 -- # read -r var val 00:07:33.016 23:03:55 -- accel/accel.sh@21 -- # val=Yes 00:07:33.016 23:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.016 23:03:55 -- accel/accel.sh@20 -- # IFS=: 00:07:33.016 23:03:55 -- accel/accel.sh@20 -- # read -r var val 00:07:33.016 23:03:55 -- accel/accel.sh@21 -- # val= 00:07:33.016 23:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.016 23:03:55 -- accel/accel.sh@20 -- # IFS=: 00:07:33.016 23:03:55 -- accel/accel.sh@20 -- # read -r var val 00:07:33.016 23:03:55 -- accel/accel.sh@21 -- # val= 00:07:33.016 23:03:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.016 23:03:55 -- accel/accel.sh@20 -- # IFS=: 00:07:33.016 23:03:55 -- accel/accel.sh@20 -- # read -r var val 00:07:33.958 23:03:56 -- accel/accel.sh@21 -- # val= 00:07:33.958 23:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.958 23:03:56 -- accel/accel.sh@20 -- # IFS=: 00:07:33.958 23:03:56 -- accel/accel.sh@20 -- # read -r var val 00:07:33.958 23:03:56 -- accel/accel.sh@21 -- # val= 00:07:33.958 23:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.958 23:03:56 -- accel/accel.sh@20 -- # IFS=: 00:07:33.958 23:03:56 -- accel/accel.sh@20 -- # read -r var val 00:07:33.958 23:03:56 -- accel/accel.sh@21 -- # val= 00:07:33.958 23:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.958 23:03:56 -- accel/accel.sh@20 -- # IFS=: 00:07:33.958 23:03:56 -- accel/accel.sh@20 -- # read -r var val 00:07:33.958 23:03:56 -- accel/accel.sh@21 -- # val= 00:07:33.958 23:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.958 23:03:56 -- accel/accel.sh@20 -- # IFS=: 00:07:33.958 23:03:56 -- accel/accel.sh@20 -- # read -r var val 00:07:33.958 23:03:56 -- accel/accel.sh@21 -- # val= 00:07:33.958 23:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.958 23:03:56 -- accel/accel.sh@20 -- # IFS=: 00:07:33.958 23:03:56 -- accel/accel.sh@20 -- # read -r var val 00:07:33.958 23:03:56 -- accel/accel.sh@21 -- # val= 00:07:33.958 23:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.958 23:03:56 -- accel/accel.sh@20 -- # IFS=: 00:07:33.958 23:03:56 -- accel/accel.sh@20 -- # read -r var val 00:07:33.958 23:03:56 -- accel/accel.sh@21 -- # val= 00:07:33.958 23:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.958 23:03:56 -- accel/accel.sh@20 -- # IFS=: 00:07:33.958 23:03:56 -- accel/accel.sh@20 -- # read -r var val 00:07:33.958 23:03:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:33.958 23:03:56 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:33.958 23:03:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.958 00:07:33.958 real 0m2.480s 00:07:33.958 user 0m2.290s 00:07:33.958 sys 0m0.198s 00:07:33.958 23:03:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.958 23:03:56 -- common/autotest_common.sh@10 -- # set +x 
00:07:33.958 ************************************ 00:07:33.958 END TEST accel_decomp_mthread 00:07:33.958 ************************************ 00:07:34.218 23:03:56 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:34.218 23:03:56 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:34.218 23:03:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:34.218 23:03:56 -- common/autotest_common.sh@10 -- # set +x 00:07:34.218 ************************************ 00:07:34.218 START TEST accel_deomp_full_mthread 00:07:34.218 ************************************ 00:07:34.218 23:03:56 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:34.218 23:03:56 -- accel/accel.sh@16 -- # local accel_opc 00:07:34.218 23:03:56 -- accel/accel.sh@17 -- # local accel_module 00:07:34.218 23:03:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:34.218 23:03:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:34.218 23:03:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.218 23:03:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.218 23:03:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.218 23:03:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.218 23:03:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.218 23:03:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.218 23:03:56 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.218 23:03:56 -- accel/accel.sh@42 -- # jq -r . 00:07:34.218 [2024-06-07 23:03:56.678217] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:34.218 [2024-06-07 23:03:56.678324] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2637698 ] 00:07:34.218 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.218 [2024-06-07 23:03:56.752072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.218 [2024-06-07 23:03:56.783162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.601 23:03:57 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:35.601 00:07:35.601 SPDK Configuration: 00:07:35.601 Core mask: 0x1 00:07:35.601 00:07:35.601 Accel Perf Configuration: 00:07:35.601 Workload Type: decompress 00:07:35.601 Transfer size: 111250 bytes 00:07:35.601 Vector count 1 00:07:35.601 Module: software 00:07:35.601 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:35.601 Queue depth: 32 00:07:35.601 Allocate depth: 32 00:07:35.601 # threads/core: 2 00:07:35.601 Run time: 1 seconds 00:07:35.601 Verify: Yes 00:07:35.601 00:07:35.601 Running for 1 seconds... 
00:07:35.601 00:07:35.601 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:35.601 ------------------------------------------------------------------------------------ 00:07:35.601 0,1 2080/s 85 MiB/s 0 0 00:07:35.601 0,0 2048/s 84 MiB/s 0 0 00:07:35.601 ==================================================================================== 00:07:35.601 Total 4128/s 437 MiB/s 0 0' 00:07:35.601 23:03:57 -- accel/accel.sh@20 -- # IFS=: 00:07:35.601 23:03:57 -- accel/accel.sh@20 -- # read -r var val 00:07:35.601 23:03:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:35.601 23:03:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:35.601 23:03:57 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.601 23:03:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.601 23:03:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.601 23:03:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.601 23:03:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.601 23:03:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.601 23:03:57 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.601 23:03:57 -- accel/accel.sh@42 -- # jq -r . 00:07:35.601 [2024-06-07 23:03:57.952764] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:35.601 [2024-06-07 23:03:57.952855] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2638032 ] 00:07:35.601 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.601 [2024-06-07 23:03:58.014405] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.601 [2024-06-07 23:03:58.043021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.601 23:03:58 -- accel/accel.sh@21 -- # val= 00:07:35.601 23:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.601 23:03:58 -- accel/accel.sh@20 -- # IFS=: 00:07:35.601 23:03:58 -- accel/accel.sh@20 -- # read -r var val 00:07:35.601 23:03:58 -- accel/accel.sh@21 -- # val= 00:07:35.601 23:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.601 23:03:58 -- accel/accel.sh@20 -- # IFS=: 00:07:35.601 23:03:58 -- accel/accel.sh@20 -- # read -r var val 00:07:35.601 23:03:58 -- accel/accel.sh@21 -- # val= 00:07:35.601 23:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.601 23:03:58 -- accel/accel.sh@20 -- # IFS=: 00:07:35.601 23:03:58 -- accel/accel.sh@20 -- # read -r var val 00:07:35.601 23:03:58 -- accel/accel.sh@21 -- # val=0x1 00:07:35.601 23:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.601 23:03:58 -- accel/accel.sh@20 -- # IFS=: 00:07:35.601 23:03:58 -- accel/accel.sh@20 -- # read -r var val 00:07:35.601 23:03:58 -- accel/accel.sh@21 -- # val= 00:07:35.601 23:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.601 23:03:58 -- accel/accel.sh@20 -- # IFS=: 00:07:35.601 23:03:58 -- accel/accel.sh@20 -- # read -r var val 00:07:35.601 23:03:58 -- accel/accel.sh@21 -- # val= 00:07:35.601 23:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.601 23:03:58 -- accel/accel.sh@20 -- # IFS=: 00:07:35.601 23:03:58 -- accel/accel.sh@20 -- # read -r var val 00:07:35.602 23:03:58 -- accel/accel.sh@21 -- # val=decompress 00:07:35.602 
23:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.602 23:03:58 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:35.602 23:03:58 -- accel/accel.sh@20 -- # IFS=: 00:07:35.602 23:03:58 -- accel/accel.sh@20 -- # read -r var val 00:07:35.602 23:03:58 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:35.602 23:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.602 23:03:58 -- accel/accel.sh@20 -- # IFS=: 00:07:35.602 23:03:58 -- accel/accel.sh@20 -- # read -r var val 00:07:35.602 23:03:58 -- accel/accel.sh@21 -- # val= 00:07:35.602 23:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.602 23:03:58 -- accel/accel.sh@20 -- # IFS=: 00:07:35.602 23:03:58 -- accel/accel.sh@20 -- # read -r var val 00:07:35.602 23:03:58 -- accel/accel.sh@21 -- # val=software 00:07:35.602 23:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.602 23:03:58 -- accel/accel.sh@23 -- # accel_module=software 00:07:35.602 23:03:58 -- accel/accel.sh@20 -- # IFS=: 00:07:35.602 23:03:58 -- accel/accel.sh@20 -- # read -r var val 00:07:35.602 23:03:58 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:35.602 23:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.602 23:03:58 -- accel/accel.sh@20 -- # IFS=: 00:07:35.602 23:03:58 -- accel/accel.sh@20 -- # read -r var val 00:07:35.602 23:03:58 -- accel/accel.sh@21 -- # val=32 00:07:35.602 23:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.602 23:03:58 -- accel/accel.sh@20 -- # IFS=: 00:07:35.602 23:03:58 -- accel/accel.sh@20 -- # read -r var val 00:07:35.602 23:03:58 -- accel/accel.sh@21 -- # val=32 00:07:35.602 23:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.602 23:03:58 -- accel/accel.sh@20 -- # IFS=: 00:07:35.602 23:03:58 -- accel/accel.sh@20 -- # read -r var val 00:07:35.602 23:03:58 -- accel/accel.sh@21 -- # val=2 00:07:35.602 23:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.602 23:03:58 -- accel/accel.sh@20 -- # IFS=: 00:07:35.602 23:03:58 -- accel/accel.sh@20 -- # read -r var val 00:07:35.602 23:03:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:35.602 23:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.602 23:03:58 -- accel/accel.sh@20 -- # IFS=: 00:07:35.602 23:03:58 -- accel/accel.sh@20 -- # read -r var val 00:07:35.602 23:03:58 -- accel/accel.sh@21 -- # val=Yes 00:07:35.602 23:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.602 23:03:58 -- accel/accel.sh@20 -- # IFS=: 00:07:35.602 23:03:58 -- accel/accel.sh@20 -- # read -r var val 00:07:35.602 23:03:58 -- accel/accel.sh@21 -- # val= 00:07:35.602 23:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.602 23:03:58 -- accel/accel.sh@20 -- # IFS=: 00:07:35.602 23:03:58 -- accel/accel.sh@20 -- # read -r var val 00:07:35.602 23:03:58 -- accel/accel.sh@21 -- # val= 00:07:35.602 23:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.602 23:03:58 -- accel/accel.sh@20 -- # IFS=: 00:07:35.602 23:03:58 -- accel/accel.sh@20 -- # read -r var val 00:07:36.543 23:03:59 -- accel/accel.sh@21 -- # val= 00:07:36.543 23:03:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.544 23:03:59 -- accel/accel.sh@20 -- # IFS=: 00:07:36.544 23:03:59 -- accel/accel.sh@20 -- # read -r var val 00:07:36.544 23:03:59 -- accel/accel.sh@21 -- # val= 00:07:36.544 23:03:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.544 23:03:59 -- accel/accel.sh@20 -- # IFS=: 00:07:36.544 23:03:59 -- accel/accel.sh@20 -- # read -r var val 00:07:36.544 23:03:59 -- accel/accel.sh@21 -- # val= 00:07:36.544 23:03:59 -- accel/accel.sh@22 -- # 
case "$var" in 00:07:36.544 23:03:59 -- accel/accel.sh@20 -- # IFS=: 00:07:36.544 23:03:59 -- accel/accel.sh@20 -- # read -r var val 00:07:36.544 23:03:59 -- accel/accel.sh@21 -- # val= 00:07:36.544 23:03:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.544 23:03:59 -- accel/accel.sh@20 -- # IFS=: 00:07:36.544 23:03:59 -- accel/accel.sh@20 -- # read -r var val 00:07:36.544 23:03:59 -- accel/accel.sh@21 -- # val= 00:07:36.544 23:03:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.544 23:03:59 -- accel/accel.sh@20 -- # IFS=: 00:07:36.544 23:03:59 -- accel/accel.sh@20 -- # read -r var val 00:07:36.544 23:03:59 -- accel/accel.sh@21 -- # val= 00:07:36.544 23:03:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.544 23:03:59 -- accel/accel.sh@20 -- # IFS=: 00:07:36.544 23:03:59 -- accel/accel.sh@20 -- # read -r var val 00:07:36.544 23:03:59 -- accel/accel.sh@21 -- # val= 00:07:36.544 23:03:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.544 23:03:59 -- accel/accel.sh@20 -- # IFS=: 00:07:36.544 23:03:59 -- accel/accel.sh@20 -- # read -r var val 00:07:36.544 23:03:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:36.544 23:03:59 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:36.544 23:03:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.544 00:07:36.544 real 0m2.548s 00:07:36.544 user 0m2.351s 00:07:36.544 sys 0m0.203s 00:07:36.544 23:03:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.544 23:03:59 -- common/autotest_common.sh@10 -- # set +x 00:07:36.544 ************************************ 00:07:36.544 END TEST accel_deomp_full_mthread 00:07:36.544 ************************************ 00:07:36.804 23:03:59 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:36.804 23:03:59 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:36.804 23:03:59 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:36.804 23:03:59 -- accel/accel.sh@129 -- # build_accel_config 00:07:36.804 23:03:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:36.804 23:03:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:36.804 23:03:59 -- common/autotest_common.sh@10 -- # set +x 00:07:36.804 23:03:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.804 23:03:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.804 23:03:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:36.804 23:03:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:36.804 23:03:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:36.804 23:03:59 -- accel/accel.sh@42 -- # jq -r . 00:07:36.804 ************************************ 00:07:36.804 START TEST accel_dif_functional_tests 00:07:36.804 ************************************ 00:07:36.804 23:03:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:36.804 [2024-06-07 23:03:59.286063] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:07:36.804 [2024-06-07 23:03:59.286122] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2638380 ] 00:07:36.804 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.804 [2024-06-07 23:03:59.346321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:36.804 [2024-06-07 23:03:59.378640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.804 [2024-06-07 23:03:59.378755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.804 [2024-06-07 23:03:59.378757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.804 00:07:36.804 00:07:36.804 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.804 http://cunit.sourceforge.net/ 00:07:36.804 00:07:36.804 00:07:36.804 Suite: accel_dif 00:07:36.804 Test: verify: DIF generated, GUARD check ...passed 00:07:36.804 Test: verify: DIF generated, APPTAG check ...passed 00:07:36.804 Test: verify: DIF generated, REFTAG check ...passed 00:07:36.804 Test: verify: DIF not generated, GUARD check ...[2024-06-07 23:03:59.427880] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:36.804 [2024-06-07 23:03:59.427920] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:36.804 passed 00:07:36.804 Test: verify: DIF not generated, APPTAG check ...[2024-06-07 23:03:59.427948] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:36.804 [2024-06-07 23:03:59.427963] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:36.804 passed 00:07:36.804 Test: verify: DIF not generated, REFTAG check ...[2024-06-07 23:03:59.427978] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:36.804 [2024-06-07 23:03:59.427992] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:36.804 passed 00:07:36.805 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:36.805 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-07 23:03:59.428033] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:36.805 passed 00:07:36.805 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:36.805 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:36.805 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:36.805 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-07 23:03:59.428144] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:36.805 passed 00:07:36.805 Test: generate copy: DIF generated, GUARD check ...passed 00:07:36.805 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:36.805 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:36.805 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:36.805 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:36.805 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:36.805 Test: generate copy: iovecs-len validate ...[2024-06-07 23:03:59.428352] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:36.805 passed 00:07:36.805 Test: generate copy: buffer alignment validate ...passed 00:07:36.805 00:07:36.805 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.805 suites 1 1 n/a 0 0 00:07:36.805 tests 20 20 20 0 0 00:07:36.805 asserts 204 204 204 0 n/a 00:07:36.805 00:07:36.805 Elapsed time = 0.002 seconds 00:07:37.066 00:07:37.066 real 0m0.289s 00:07:37.066 user 0m0.427s 00:07:37.066 sys 0m0.111s 00:07:37.066 23:03:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.066 23:03:59 -- common/autotest_common.sh@10 -- # set +x 00:07:37.066 ************************************ 00:07:37.066 END TEST accel_dif_functional_tests 00:07:37.066 ************************************ 00:07:37.066 00:07:37.066 real 0m52.328s 00:07:37.066 user 1m1.009s 00:07:37.066 sys 0m5.492s 00:07:37.066 23:03:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.066 23:03:59 -- common/autotest_common.sh@10 -- # set +x 00:07:37.066 ************************************ 00:07:37.066 END TEST accel 00:07:37.066 ************************************ 00:07:37.066 23:03:59 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:37.066 23:03:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:37.066 23:03:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:37.066 23:03:59 -- common/autotest_common.sh@10 -- # set +x 00:07:37.066 ************************************ 00:07:37.066 START TEST accel_rpc 00:07:37.066 ************************************ 00:07:37.066 23:03:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:37.066 * Looking for test storage... 00:07:37.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:37.066 23:03:59 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:37.066 23:03:59 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2638455 00:07:37.066 23:03:59 -- accel/accel_rpc.sh@15 -- # waitforlisten 2638455 00:07:37.066 23:03:59 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:37.066 23:03:59 -- common/autotest_common.sh@819 -- # '[' -z 2638455 ']' 00:07:37.066 23:03:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.066 23:03:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:37.066 23:03:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.066 23:03:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:37.066 23:03:59 -- common/autotest_common.sh@10 -- # set +x 00:07:37.327 [2024-06-07 23:03:59.757977] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:07:37.327 [2024-06-07 23:03:59.758055] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2638455 ] 00:07:37.327 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.327 [2024-06-07 23:03:59.822781] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.327 [2024-06-07 23:03:59.859899] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:37.327 [2024-06-07 23:03:59.860059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.898 23:04:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:37.898 23:04:00 -- common/autotest_common.sh@852 -- # return 0 00:07:37.898 23:04:00 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:37.898 23:04:00 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:37.898 23:04:00 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:37.898 23:04:00 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:37.898 23:04:00 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:37.898 23:04:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:37.898 23:04:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:37.898 23:04:00 -- common/autotest_common.sh@10 -- # set +x 00:07:37.898 ************************************ 00:07:37.898 START TEST accel_assign_opcode 00:07:37.898 ************************************ 00:07:37.898 23:04:00 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:07:37.898 23:04:00 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:37.898 23:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:37.898 23:04:00 -- common/autotest_common.sh@10 -- # set +x 00:07:37.898 [2024-06-07 23:04:00.525986] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:37.898 23:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:37.898 23:04:00 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:37.898 23:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:37.898 23:04:00 -- common/autotest_common.sh@10 -- # set +x 00:07:37.898 [2024-06-07 23:04:00.538011] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:37.898 23:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:37.898 23:04:00 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:37.898 23:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:37.898 23:04:00 -- common/autotest_common.sh@10 -- # set +x 00:07:38.159 23:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:38.159 23:04:00 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:38.159 23:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:38.159 23:04:00 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:38.159 23:04:00 -- common/autotest_common.sh@10 -- # set +x 00:07:38.159 23:04:00 -- accel/accel_rpc.sh@42 -- # grep software 00:07:38.159 23:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:38.159 software 00:07:38.159 00:07:38.159 real 0m0.205s 00:07:38.159 user 0m0.048s 00:07:38.159 sys 0m0.010s 00:07:38.159 23:04:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.159 23:04:00 -- common/autotest_common.sh@10 -- # set +x 
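For reference, the opcode assignment exercised above reduces to three RPCs against a target started with --wait-for-rpc; a minimal manual replay (a sketch, assumed to run from the spdk repo root with the socket left at the default /var/tmp/spdk.sock):
  # route the 'copy' opcode to the software module before subsystem init completes
  scripts/rpc.py accel_assign_opc -o copy -m software
  # let initialization finish, then confirm the assignment stuck
  scripts/rpc.py framework_start_init
  scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # expected output: software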
00:07:38.159 ************************************ 00:07:38.159 END TEST accel_assign_opcode 00:07:38.159 ************************************ 00:07:38.159 23:04:00 -- accel/accel_rpc.sh@55 -- # killprocess 2638455 00:07:38.159 23:04:00 -- common/autotest_common.sh@926 -- # '[' -z 2638455 ']' 00:07:38.159 23:04:00 -- common/autotest_common.sh@930 -- # kill -0 2638455 00:07:38.159 23:04:00 -- common/autotest_common.sh@931 -- # uname 00:07:38.159 23:04:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:38.159 23:04:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2638455 00:07:38.159 23:04:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:38.159 23:04:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:38.159 23:04:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2638455' 00:07:38.159 killing process with pid 2638455 00:07:38.159 23:04:00 -- common/autotest_common.sh@945 -- # kill 2638455 00:07:38.159 23:04:00 -- common/autotest_common.sh@950 -- # wait 2638455 00:07:38.420 00:07:38.420 real 0m1.402s 00:07:38.420 user 0m1.457s 00:07:38.420 sys 0m0.384s 00:07:38.420 23:04:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.420 23:04:01 -- common/autotest_common.sh@10 -- # set +x 00:07:38.420 ************************************ 00:07:38.420 END TEST accel_rpc 00:07:38.420 ************************************ 00:07:38.420 23:04:01 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:38.420 23:04:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:38.420 23:04:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:38.420 23:04:01 -- common/autotest_common.sh@10 -- # set +x 00:07:38.420 ************************************ 00:07:38.420 START TEST app_cmdline 00:07:38.420 ************************************ 00:07:38.420 23:04:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:38.680 * Looking for test storage... 00:07:38.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:38.680 23:04:01 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:38.680 23:04:01 -- app/cmdline.sh@17 -- # spdk_tgt_pid=2638856 00:07:38.680 23:04:01 -- app/cmdline.sh@18 -- # waitforlisten 2638856 00:07:38.680 23:04:01 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:38.680 23:04:01 -- common/autotest_common.sh@819 -- # '[' -z 2638856 ']' 00:07:38.680 23:04:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.680 23:04:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:38.680 23:04:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.680 23:04:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:38.680 23:04:01 -- common/autotest_common.sh@10 -- # set +x 00:07:38.680 [2024-06-07 23:04:01.202194] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:07:38.680 [2024-06-07 23:04:01.202277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2638856 ] 00:07:38.680 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.680 [2024-06-07 23:04:01.269271] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.680 [2024-06-07 23:04:01.305854] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:38.680 [2024-06-07 23:04:01.306035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.622 23:04:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:39.622 23:04:01 -- common/autotest_common.sh@852 -- # return 0 00:07:39.622 23:04:01 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:39.622 { 00:07:39.622 "version": "SPDK v24.01.1-pre git sha1 130b9406a", 00:07:39.622 "fields": { 00:07:39.622 "major": 24, 00:07:39.622 "minor": 1, 00:07:39.622 "patch": 1, 00:07:39.622 "suffix": "-pre", 00:07:39.622 "commit": "130b9406a" 00:07:39.622 } 00:07:39.622 } 00:07:39.622 23:04:02 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:39.622 23:04:02 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:39.622 23:04:02 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:39.622 23:04:02 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:39.622 23:04:02 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:39.622 23:04:02 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:39.622 23:04:02 -- app/cmdline.sh@26 -- # sort 00:07:39.622 23:04:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:39.622 23:04:02 -- common/autotest_common.sh@10 -- # set +x 00:07:39.622 23:04:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:39.622 23:04:02 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:39.622 23:04:02 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:39.622 23:04:02 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:39.622 23:04:02 -- common/autotest_common.sh@640 -- # local es=0 00:07:39.622 23:04:02 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:39.622 23:04:02 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:39.622 23:04:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:39.622 23:04:02 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:39.622 23:04:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:39.622 23:04:02 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:39.622 23:04:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:39.622 23:04:02 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:39.622 23:04:02 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:39.622 23:04:02 -- 
common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:39.622 request: 00:07:39.622 { 00:07:39.622 "method": "env_dpdk_get_mem_stats", 00:07:39.622 "req_id": 1 00:07:39.622 } 00:07:39.622 Got JSON-RPC error response 00:07:39.622 response: 00:07:39.622 { 00:07:39.622 "code": -32601, 00:07:39.622 "message": "Method not found" 00:07:39.622 } 00:07:39.882 23:04:02 -- common/autotest_common.sh@643 -- # es=1 00:07:39.882 23:04:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:39.882 23:04:02 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:39.882 23:04:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:39.882 23:04:02 -- app/cmdline.sh@1 -- # killprocess 2638856 00:07:39.882 23:04:02 -- common/autotest_common.sh@926 -- # '[' -z 2638856 ']' 00:07:39.883 23:04:02 -- common/autotest_common.sh@930 -- # kill -0 2638856 00:07:39.883 23:04:02 -- common/autotest_common.sh@931 -- # uname 00:07:39.883 23:04:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:39.883 23:04:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2638856 00:07:39.883 23:04:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:39.883 23:04:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:39.883 23:04:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2638856' 00:07:39.883 killing process with pid 2638856 00:07:39.883 23:04:02 -- common/autotest_common.sh@945 -- # kill 2638856 00:07:39.883 23:04:02 -- common/autotest_common.sh@950 -- # wait 2638856 00:07:39.883 00:07:39.883 real 0m1.499s 00:07:39.883 user 0m1.774s 00:07:39.883 sys 0m0.406s 00:07:39.883 23:04:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.883 23:04:02 -- common/autotest_common.sh@10 -- # set +x 00:07:39.883 ************************************ 00:07:39.883 END TEST app_cmdline 00:07:39.883 ************************************ 00:07:40.143 23:04:02 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:40.143 23:04:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:40.143 23:04:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:40.143 23:04:02 -- common/autotest_common.sh@10 -- # set +x 00:07:40.143 ************************************ 00:07:40.143 START TEST version 00:07:40.143 ************************************ 00:07:40.143 23:04:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:40.143 * Looking for test storage... 
00:07:40.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:40.144 23:04:02 -- app/version.sh@17 -- # get_header_version major 00:07:40.144 23:04:02 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:40.144 23:04:02 -- app/version.sh@14 -- # cut -f2 00:07:40.144 23:04:02 -- app/version.sh@14 -- # tr -d '"' 00:07:40.144 23:04:02 -- app/version.sh@17 -- # major=24 00:07:40.144 23:04:02 -- app/version.sh@18 -- # get_header_version minor 00:07:40.144 23:04:02 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:40.144 23:04:02 -- app/version.sh@14 -- # cut -f2 00:07:40.144 23:04:02 -- app/version.sh@14 -- # tr -d '"' 00:07:40.144 23:04:02 -- app/version.sh@18 -- # minor=1 00:07:40.144 23:04:02 -- app/version.sh@19 -- # get_header_version patch 00:07:40.144 23:04:02 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:40.144 23:04:02 -- app/version.sh@14 -- # cut -f2 00:07:40.144 23:04:02 -- app/version.sh@14 -- # tr -d '"' 00:07:40.144 23:04:02 -- app/version.sh@19 -- # patch=1 00:07:40.144 23:04:02 -- app/version.sh@20 -- # get_header_version suffix 00:07:40.144 23:04:02 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:40.144 23:04:02 -- app/version.sh@14 -- # cut -f2 00:07:40.144 23:04:02 -- app/version.sh@14 -- # tr -d '"' 00:07:40.144 23:04:02 -- app/version.sh@20 -- # suffix=-pre 00:07:40.144 23:04:02 -- app/version.sh@22 -- # version=24.1 00:07:40.144 23:04:02 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:40.144 23:04:02 -- app/version.sh@25 -- # version=24.1.1 00:07:40.144 23:04:02 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:40.144 23:04:02 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:40.144 23:04:02 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:40.144 23:04:02 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:40.144 23:04:02 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:40.144 00:07:40.144 real 0m0.165s 00:07:40.144 user 0m0.079s 00:07:40.144 sys 0m0.124s 00:07:40.144 23:04:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.144 23:04:02 -- common/autotest_common.sh@10 -- # set +x 00:07:40.144 ************************************ 00:07:40.144 END TEST version 00:07:40.144 ************************************ 00:07:40.144 23:04:02 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:07:40.144 23:04:02 -- spdk/autotest.sh@204 -- # uname -s 00:07:40.144 23:04:02 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:07:40.144 23:04:02 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:40.144 23:04:02 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:40.144 23:04:02 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:07:40.144 23:04:02 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:07:40.144 23:04:02 -- spdk/autotest.sh@268 -- # timing_exit lib 00:07:40.144 23:04:02 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:07:40.144 23:04:02 -- common/autotest_common.sh@10 -- # set +x 00:07:40.405 23:04:02 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:40.405 23:04:02 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:07:40.405 23:04:02 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:07:40.405 23:04:02 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:07:40.405 23:04:02 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:07:40.405 23:04:02 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:07:40.406 23:04:02 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:40.406 23:04:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:40.406 23:04:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:40.406 23:04:02 -- common/autotest_common.sh@10 -- # set +x 00:07:40.406 ************************************ 00:07:40.406 START TEST nvmf_tcp 00:07:40.406 ************************************ 00:07:40.406 23:04:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:40.406 * Looking for test storage... 00:07:40.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:40.406 23:04:02 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:40.406 23:04:02 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:40.406 23:04:02 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:40.406 23:04:02 -- nvmf/common.sh@7 -- # uname -s 00:07:40.406 23:04:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.406 23:04:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.406 23:04:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.406 23:04:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.406 23:04:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.406 23:04:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.406 23:04:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.406 23:04:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.406 23:04:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.406 23:04:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.406 23:04:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:40.406 23:04:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:40.406 23:04:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.406 23:04:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.406 23:04:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:40.406 23:04:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:40.406 23:04:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.406 23:04:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.406 23:04:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.406 23:04:02 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.406 23:04:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.406 23:04:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.406 23:04:02 -- paths/export.sh@5 -- # export PATH 00:07:40.406 23:04:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.406 23:04:02 -- nvmf/common.sh@46 -- # : 0 00:07:40.406 23:04:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:40.406 23:04:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:40.406 23:04:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:40.406 23:04:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.406 23:04:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.406 23:04:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:40.406 23:04:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:40.406 23:04:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:40.406 23:04:02 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:40.406 23:04:02 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:40.406 23:04:02 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:40.406 23:04:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:40.406 23:04:02 -- common/autotest_common.sh@10 -- # set +x 00:07:40.406 23:04:02 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:40.406 23:04:02 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:40.406 23:04:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:40.406 23:04:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:40.406 23:04:02 -- common/autotest_common.sh@10 -- # set +x 00:07:40.406 ************************************ 00:07:40.406 START TEST nvmf_example 00:07:40.406 ************************************ 00:07:40.406 23:04:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:40.406 * Looking for test storage... 
00:07:40.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:40.667 23:04:03 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:40.667 23:04:03 -- nvmf/common.sh@7 -- # uname -s 00:07:40.667 23:04:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.667 23:04:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.667 23:04:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.667 23:04:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.667 23:04:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.667 23:04:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.667 23:04:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.667 23:04:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.667 23:04:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.667 23:04:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.667 23:04:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:40.667 23:04:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:40.667 23:04:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.667 23:04:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.667 23:04:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:40.667 23:04:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:40.667 23:04:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.667 23:04:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.667 23:04:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.667 23:04:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.667 23:04:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.667 23:04:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.667 23:04:03 -- paths/export.sh@5 -- # export PATH 00:07:40.667 23:04:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.667 23:04:03 -- nvmf/common.sh@46 -- # : 0 00:07:40.667 23:04:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:40.667 23:04:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:40.667 23:04:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:40.667 23:04:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.667 23:04:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.667 23:04:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:40.667 23:04:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:40.667 23:04:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:40.667 23:04:03 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:40.667 23:04:03 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:40.667 23:04:03 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:40.668 23:04:03 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:40.668 23:04:03 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:40.668 23:04:03 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:40.668 23:04:03 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:40.668 23:04:03 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:40.668 23:04:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:40.668 23:04:03 -- common/autotest_common.sh@10 -- # set +x 00:07:40.668 23:04:03 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:40.668 23:04:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:40.668 23:04:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:40.668 23:04:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:40.668 23:04:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:40.668 23:04:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:40.668 23:04:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.668 23:04:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:40.668 23:04:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.668 23:04:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:40.668 23:04:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:40.668 23:04:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:40.668 23:04:03 -- 
common/autotest_common.sh@10 -- # set +x 00:07:48.810 23:04:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:48.810 23:04:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:48.810 23:04:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:48.810 23:04:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:48.810 23:04:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:48.810 23:04:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:48.810 23:04:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:48.810 23:04:10 -- nvmf/common.sh@294 -- # net_devs=() 00:07:48.810 23:04:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:48.810 23:04:10 -- nvmf/common.sh@295 -- # e810=() 00:07:48.810 23:04:10 -- nvmf/common.sh@295 -- # local -ga e810 00:07:48.810 23:04:10 -- nvmf/common.sh@296 -- # x722=() 00:07:48.810 23:04:10 -- nvmf/common.sh@296 -- # local -ga x722 00:07:48.810 23:04:10 -- nvmf/common.sh@297 -- # mlx=() 00:07:48.810 23:04:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:48.810 23:04:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:48.810 23:04:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:48.810 23:04:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:48.810 23:04:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:48.810 23:04:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:48.810 23:04:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:48.810 23:04:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:48.810 23:04:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:48.810 23:04:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:48.810 23:04:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:48.810 23:04:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:48.810 23:04:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:48.810 23:04:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:48.810 23:04:10 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:48.810 23:04:10 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:48.810 23:04:10 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:48.810 23:04:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:48.810 23:04:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:48.810 23:04:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:48.810 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:48.810 23:04:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:48.810 23:04:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:48.810 23:04:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.810 23:04:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.810 23:04:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:48.810 23:04:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:48.810 23:04:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:48.810 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:48.810 23:04:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:48.810 23:04:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:48.810 23:04:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.810 23:04:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
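The NIC probe above walks PCI vendor/device IDs (0x8086:0x159b is the Intel E810 "ice" family) and maps each match to its netdev through sysfs. Roughly the same check can be made by hand with pciutils; a sketch, using the device found above (0000:31:00.0) as the example address:
  # list E810-family ports and the driver bound to them
  lspci -nn -d 8086:159b
  lspci -k -d 8086:159b | grep -i 'driver in use'     # expect: ice
  # the netdev name for a given PCI address is exposed under sysfs, as the script relies on
  ls /sys/bus/pci/devices/0000:31:00.0/net/           # e.g. cvl_0_0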
00:07:48.810 23:04:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:48.810 23:04:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:48.810 23:04:10 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:48.810 23:04:10 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:48.810 23:04:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:48.810 23:04:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.810 23:04:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:48.810 23:04:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.810 23:04:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:48.810 Found net devices under 0000:31:00.0: cvl_0_0 00:07:48.810 23:04:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.810 23:04:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:48.810 23:04:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.810 23:04:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:48.810 23:04:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.810 23:04:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:48.810 Found net devices under 0000:31:00.1: cvl_0_1 00:07:48.810 23:04:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.810 23:04:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:48.810 23:04:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:48.810 23:04:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:48.810 23:04:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:48.811 23:04:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:48.811 23:04:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:48.811 23:04:10 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:48.811 23:04:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:48.811 23:04:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:48.811 23:04:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:48.811 23:04:10 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:48.811 23:04:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:48.811 23:04:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:48.811 23:04:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:48.811 23:04:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:48.811 23:04:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:48.811 23:04:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:48.811 23:04:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:48.811 23:04:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:48.811 23:04:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:48.811 23:04:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:48.811 23:04:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:48.811 23:04:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:48.811 23:04:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:48.811 23:04:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:48.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:48.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.866 ms 00:07:48.811 00:07:48.811 --- 10.0.0.2 ping statistics --- 00:07:48.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.811 rtt min/avg/max/mdev = 0.866/0.866/0.866/0.000 ms 00:07:48.811 23:04:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:48.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:48.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:07:48.811 00:07:48.811 --- 10.0.0.1 ping statistics --- 00:07:48.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.811 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:07:48.811 23:04:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:48.811 23:04:10 -- nvmf/common.sh@410 -- # return 0 00:07:48.811 23:04:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:48.811 23:04:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:48.811 23:04:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:48.811 23:04:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:48.811 23:04:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:48.811 23:04:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:48.811 23:04:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:48.811 23:04:10 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:48.811 23:04:10 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:48.811 23:04:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:48.811 23:04:10 -- common/autotest_common.sh@10 -- # set +x 00:07:48.811 23:04:10 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:48.811 23:04:10 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:48.811 23:04:10 -- target/nvmf_example.sh@34 -- # nvmfpid=2643028 00:07:48.811 23:04:10 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:48.811 23:04:10 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:48.811 23:04:10 -- target/nvmf_example.sh@36 -- # waitforlisten 2643028 00:07:48.811 23:04:10 -- common/autotest_common.sh@819 -- # '[' -z 2643028 ']' 00:07:48.811 23:04:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.811 23:04:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:48.811 23:04:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
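At this point the test has split the two ports across namespaces: cvl_0_0 (10.0.0.2) sits inside cvl_0_0_ns_spdk with the example target, while cvl_0_1 (10.0.0.1) stays in the default namespace as the initiator side. A quick manual sanity check of that layout (a sketch; names and addresses are the ones assigned above):
  ip netns exec cvl_0_0_ns_spdk ip -4 addr show dev cvl_0_0   # expect 10.0.0.2/24
  ip -4 addr show dev cvl_0_1                                 # expect 10.0.0.1/24
  # once the listener is added below, port 4420 should show up inside the namespace
  ip netns exec cvl_0_0_ns_spdk ss -ltn | grep 4420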
00:07:48.811 23:04:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:48.811 23:04:10 -- common/autotest_common.sh@10 -- # set +x 00:07:48.811 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.811 23:04:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:48.811 23:04:11 -- common/autotest_common.sh@852 -- # return 0 00:07:48.811 23:04:11 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:48.811 23:04:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:48.811 23:04:11 -- common/autotest_common.sh@10 -- # set +x 00:07:48.811 23:04:11 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:48.811 23:04:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:48.811 23:04:11 -- common/autotest_common.sh@10 -- # set +x 00:07:48.811 23:04:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:48.811 23:04:11 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:48.811 23:04:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:48.811 23:04:11 -- common/autotest_common.sh@10 -- # set +x 00:07:48.811 23:04:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:48.811 23:04:11 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:48.811 23:04:11 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:48.811 23:04:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:48.811 23:04:11 -- common/autotest_common.sh@10 -- # set +x 00:07:48.811 23:04:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:48.811 23:04:11 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:48.811 23:04:11 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:48.811 23:04:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:48.811 23:04:11 -- common/autotest_common.sh@10 -- # set +x 00:07:48.811 23:04:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:48.811 23:04:11 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:48.811 23:04:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:48.811 23:04:11 -- common/autotest_common.sh@10 -- # set +x 00:07:48.811 23:04:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:48.811 23:04:11 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:48.811 23:04:11 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:48.811 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.044 Initializing NVMe Controllers 00:08:01.044 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:01.044 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:01.044 Initialization complete. Launching workers. 
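Condensed, the rpc_cmd sequence that produced this run is equivalent to the following rpc.py calls against the example target's default /var/tmp/spdk.sock (a sketch; the malloc geometry, NQN, serial and listen address are the values used above):
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512                    # prints the new bdev name, e.g. Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: the perf run whose results follow
  build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'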
00:08:01.044 ======================================================== 00:08:01.044 Latency(us) 00:08:01.044 Device Information : IOPS MiB/s Average min max 00:08:01.044 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19046.28 74.40 3359.81 588.30 16003.50 00:08:01.044 ======================================================== 00:08:01.044 Total : 19046.28 74.40 3359.81 588.30 16003.50 00:08:01.044 00:08:01.044 23:04:21 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:01.044 23:04:21 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:01.044 23:04:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:01.044 23:04:21 -- nvmf/common.sh@116 -- # sync 00:08:01.044 23:04:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:01.044 23:04:21 -- nvmf/common.sh@119 -- # set +e 00:08:01.044 23:04:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:01.044 23:04:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:01.044 rmmod nvme_tcp 00:08:01.044 rmmod nvme_fabrics 00:08:01.044 rmmod nvme_keyring 00:08:01.044 23:04:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:01.044 23:04:21 -- nvmf/common.sh@123 -- # set -e 00:08:01.044 23:04:21 -- nvmf/common.sh@124 -- # return 0 00:08:01.044 23:04:21 -- nvmf/common.sh@477 -- # '[' -n 2643028 ']' 00:08:01.044 23:04:21 -- nvmf/common.sh@478 -- # killprocess 2643028 00:08:01.044 23:04:21 -- common/autotest_common.sh@926 -- # '[' -z 2643028 ']' 00:08:01.044 23:04:21 -- common/autotest_common.sh@930 -- # kill -0 2643028 00:08:01.044 23:04:21 -- common/autotest_common.sh@931 -- # uname 00:08:01.044 23:04:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:01.044 23:04:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2643028 00:08:01.044 23:04:21 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:08:01.044 23:04:21 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:08:01.044 23:04:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2643028' 00:08:01.044 killing process with pid 2643028 00:08:01.044 23:04:21 -- common/autotest_common.sh@945 -- # kill 2643028 00:08:01.044 23:04:21 -- common/autotest_common.sh@950 -- # wait 2643028 00:08:01.044 nvmf threads initialize successfully 00:08:01.044 bdev subsystem init successfully 00:08:01.044 created a nvmf target service 00:08:01.044 create targets's poll groups done 00:08:01.044 all subsystems of target started 00:08:01.044 nvmf target is running 00:08:01.044 all subsystems of target stopped 00:08:01.044 destroy targets's poll groups done 00:08:01.044 destroyed the nvmf target service 00:08:01.044 bdev subsystem finish successfully 00:08:01.044 nvmf threads destroy successfully 00:08:01.044 23:04:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:01.044 23:04:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:01.044 23:04:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:01.044 23:04:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:01.044 23:04:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:01.044 23:04:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.044 23:04:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:01.044 23:04:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.305 23:04:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:01.305 23:04:23 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:01.305 23:04:23 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:08:01.305 23:04:23 -- common/autotest_common.sh@10 -- # set +x 00:08:01.568 00:08:01.568 real 0m21.016s 00:08:01.568 user 0m46.901s 00:08:01.568 sys 0m6.322s 00:08:01.568 23:04:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.568 23:04:24 -- common/autotest_common.sh@10 -- # set +x 00:08:01.568 ************************************ 00:08:01.568 END TEST nvmf_example 00:08:01.568 ************************************ 00:08:01.568 23:04:24 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:01.568 23:04:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:01.568 23:04:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:01.568 23:04:24 -- common/autotest_common.sh@10 -- # set +x 00:08:01.568 ************************************ 00:08:01.568 START TEST nvmf_filesystem 00:08:01.568 ************************************ 00:08:01.568 23:04:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:01.568 * Looking for test storage... 00:08:01.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:01.568 23:04:24 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:01.568 23:04:24 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:01.568 23:04:24 -- common/autotest_common.sh@34 -- # set -e 00:08:01.568 23:04:24 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:01.568 23:04:24 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:01.568 23:04:24 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:01.568 23:04:24 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:01.568 23:04:24 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:01.568 23:04:24 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:01.568 23:04:24 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:01.568 23:04:24 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:01.568 23:04:24 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:01.568 23:04:24 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:01.568 23:04:24 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:01.568 23:04:24 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:01.568 23:04:24 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:01.568 23:04:24 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:01.568 23:04:24 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:01.568 23:04:24 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:01.568 23:04:24 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:01.568 23:04:24 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:01.568 23:04:24 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:01.568 23:04:24 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:01.568 23:04:24 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:01.568 23:04:24 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:01.568 23:04:24 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:01.568 23:04:24 -- 
common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:01.568 23:04:24 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:01.568 23:04:24 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:01.568 23:04:24 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:01.568 23:04:24 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:01.568 23:04:24 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:01.568 23:04:24 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:01.568 23:04:24 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:01.568 23:04:24 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:01.568 23:04:24 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:01.568 23:04:24 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:01.568 23:04:24 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:01.568 23:04:24 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:01.568 23:04:24 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:01.568 23:04:24 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:01.568 23:04:24 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:01.568 23:04:24 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:01.568 23:04:24 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:01.568 23:04:24 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:01.568 23:04:24 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:01.568 23:04:24 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:01.568 23:04:24 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:01.568 23:04:24 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:01.568 23:04:24 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:01.568 23:04:24 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:01.568 23:04:24 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:01.568 23:04:24 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:01.568 23:04:24 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:01.568 23:04:24 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:01.568 23:04:24 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:01.568 23:04:24 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:01.568 23:04:24 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:08:01.568 23:04:24 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:01.568 23:04:24 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:08:01.568 23:04:24 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:01.568 23:04:24 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:01.568 23:04:24 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:01.568 23:04:24 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:01.568 23:04:24 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:08:01.568 23:04:24 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:01.568 23:04:24 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:01.568 23:04:24 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:01.568 23:04:24 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:01.568 23:04:24 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:01.568 23:04:24 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:08:01.568 23:04:24 -- 
common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:01.568 23:04:24 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:01.568 23:04:24 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:01.568 23:04:24 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:08:01.568 23:04:24 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:01.568 23:04:24 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:01.568 23:04:24 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:01.568 23:04:24 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:01.568 23:04:24 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:01.568 23:04:24 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:01.568 23:04:24 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:01.568 23:04:24 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:01.568 23:04:24 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:01.568 23:04:24 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:01.568 23:04:24 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:08:01.568 23:04:24 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:01.568 23:04:24 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:01.568 23:04:24 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:01.568 23:04:24 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:01.568 23:04:24 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:01.568 23:04:24 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:01.568 23:04:24 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:01.568 23:04:24 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:01.568 23:04:24 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:01.568 23:04:24 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:01.568 23:04:24 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:01.568 23:04:24 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:01.568 23:04:24 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:01.568 23:04:24 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:01.569 23:04:24 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:01.569 23:04:24 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:01.569 #define SPDK_CONFIG_H 00:08:01.569 #define SPDK_CONFIG_APPS 1 00:08:01.569 #define SPDK_CONFIG_ARCH native 00:08:01.569 #undef SPDK_CONFIG_ASAN 00:08:01.569 #undef SPDK_CONFIG_AVAHI 00:08:01.569 #undef SPDK_CONFIG_CET 00:08:01.569 #define SPDK_CONFIG_COVERAGE 1 00:08:01.569 #define SPDK_CONFIG_CROSS_PREFIX 00:08:01.569 #undef SPDK_CONFIG_CRYPTO 00:08:01.569 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:01.569 #undef SPDK_CONFIG_CUSTOMOCF 00:08:01.569 #undef SPDK_CONFIG_DAOS 00:08:01.569 #define SPDK_CONFIG_DAOS_DIR 00:08:01.569 #define SPDK_CONFIG_DEBUG 1 00:08:01.569 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:01.569 #define SPDK_CONFIG_DPDK_DIR 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:01.569 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:01.569 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:01.569 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:01.569 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:01.569 #define SPDK_CONFIG_EXAMPLES 1 00:08:01.569 #undef SPDK_CONFIG_FC 00:08:01.569 #define SPDK_CONFIG_FC_PATH 00:08:01.569 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:01.569 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:01.569 #undef SPDK_CONFIG_FUSE 00:08:01.569 #undef SPDK_CONFIG_FUZZER 00:08:01.569 #define SPDK_CONFIG_FUZZER_LIB 00:08:01.569 #undef SPDK_CONFIG_GOLANG 00:08:01.569 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:01.569 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:01.569 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:01.569 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:01.569 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:01.569 #define SPDK_CONFIG_IDXD 1 00:08:01.569 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:01.569 #undef SPDK_CONFIG_IPSEC_MB 00:08:01.569 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:01.569 #define SPDK_CONFIG_ISAL 1 00:08:01.569 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:01.569 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:01.569 #define SPDK_CONFIG_LIBDIR 00:08:01.569 #undef SPDK_CONFIG_LTO 00:08:01.569 #define SPDK_CONFIG_MAX_LCORES 00:08:01.569 #define SPDK_CONFIG_NVME_CUSE 1 00:08:01.569 #undef SPDK_CONFIG_OCF 00:08:01.569 #define SPDK_CONFIG_OCF_PATH 00:08:01.569 #define SPDK_CONFIG_OPENSSL_PATH 00:08:01.569 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:01.569 #undef SPDK_CONFIG_PGO_USE 00:08:01.569 #define SPDK_CONFIG_PREFIX /usr/local 00:08:01.569 #undef SPDK_CONFIG_RAID5F 00:08:01.569 #undef SPDK_CONFIG_RBD 00:08:01.569 #define SPDK_CONFIG_RDMA 1 00:08:01.569 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:01.569 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:01.569 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:01.569 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:01.569 #define SPDK_CONFIG_SHARED 1 00:08:01.569 #undef SPDK_CONFIG_SMA 00:08:01.569 #define SPDK_CONFIG_TESTS 1 00:08:01.569 #undef SPDK_CONFIG_TSAN 00:08:01.569 #define SPDK_CONFIG_UBLK 1 00:08:01.569 #define SPDK_CONFIG_UBSAN 1 00:08:01.569 #undef SPDK_CONFIG_UNIT_TESTS 00:08:01.569 #undef SPDK_CONFIG_URING 00:08:01.569 #define SPDK_CONFIG_URING_PATH 00:08:01.569 #undef SPDK_CONFIG_URING_ZNS 00:08:01.569 #undef SPDK_CONFIG_USDT 00:08:01.569 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:01.569 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:01.569 #define SPDK_CONFIG_VFIO_USER 1 00:08:01.569 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:01.569 #define SPDK_CONFIG_VHOST 1 00:08:01.569 #define SPDK_CONFIG_VIRTIO 1 00:08:01.569 #undef SPDK_CONFIG_VTUNE 00:08:01.569 #define SPDK_CONFIG_VTUNE_DIR 00:08:01.569 #define SPDK_CONFIG_WERROR 1 00:08:01.569 #define SPDK_CONFIG_WPDK_DIR 00:08:01.569 #undef SPDK_CONFIG_XNVME 00:08:01.569 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:01.569 23:04:24 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:01.569 23:04:24 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:01.569 23:04:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.569 23:04:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.569 
23:04:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.569 23:04:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.569 23:04:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.569 23:04:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.569 23:04:24 -- paths/export.sh@5 -- # export PATH 00:08:01.569 23:04:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.569 23:04:24 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:01.569 23:04:24 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:01.569 23:04:24 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:01.569 23:04:24 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:01.569 23:04:24 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:01.569 23:04:24 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:01.569 23:04:24 -- pm/common@16 -- # TEST_TAG=N/A 00:08:01.569 23:04:24 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:01.569 23:04:24 -- common/autotest_common.sh@52 -- # : 1 00:08:01.569 23:04:24 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:01.569 23:04:24 -- common/autotest_common.sh@56 -- # : 0 
00:08:01.569 23:04:24 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:01.569 23:04:24 -- common/autotest_common.sh@58 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:01.569 23:04:24 -- common/autotest_common.sh@60 -- # : 1 00:08:01.569 23:04:24 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:01.569 23:04:24 -- common/autotest_common.sh@62 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:01.569 23:04:24 -- common/autotest_common.sh@64 -- # : 00:08:01.569 23:04:24 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:01.569 23:04:24 -- common/autotest_common.sh@66 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:01.569 23:04:24 -- common/autotest_common.sh@68 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:01.569 23:04:24 -- common/autotest_common.sh@70 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:01.569 23:04:24 -- common/autotest_common.sh@72 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:01.569 23:04:24 -- common/autotest_common.sh@74 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:01.569 23:04:24 -- common/autotest_common.sh@76 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:01.569 23:04:24 -- common/autotest_common.sh@78 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:01.569 23:04:24 -- common/autotest_common.sh@80 -- # : 1 00:08:01.569 23:04:24 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:01.569 23:04:24 -- common/autotest_common.sh@82 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:01.569 23:04:24 -- common/autotest_common.sh@84 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:01.569 23:04:24 -- common/autotest_common.sh@86 -- # : 1 00:08:01.569 23:04:24 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:01.569 23:04:24 -- common/autotest_common.sh@88 -- # : 1 00:08:01.569 23:04:24 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:01.569 23:04:24 -- common/autotest_common.sh@90 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:01.569 23:04:24 -- common/autotest_common.sh@92 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:01.569 23:04:24 -- common/autotest_common.sh@94 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:01.569 23:04:24 -- common/autotest_common.sh@96 -- # : tcp 00:08:01.569 23:04:24 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:01.569 23:04:24 -- common/autotest_common.sh@98 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:01.569 23:04:24 -- common/autotest_common.sh@100 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:01.569 23:04:24 -- common/autotest_common.sh@102 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:01.569 23:04:24 -- 
common/autotest_common.sh@104 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:01.569 23:04:24 -- common/autotest_common.sh@106 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:01.569 23:04:24 -- common/autotest_common.sh@108 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:01.569 23:04:24 -- common/autotest_common.sh@110 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:01.569 23:04:24 -- common/autotest_common.sh@112 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:01.569 23:04:24 -- common/autotest_common.sh@114 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:01.569 23:04:24 -- common/autotest_common.sh@116 -- # : 1 00:08:01.569 23:04:24 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:01.569 23:04:24 -- common/autotest_common.sh@118 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:01.569 23:04:24 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:01.569 23:04:24 -- common/autotest_common.sh@120 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:01.569 23:04:24 -- common/autotest_common.sh@122 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:01.569 23:04:24 -- common/autotest_common.sh@124 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:01.569 23:04:24 -- common/autotest_common.sh@126 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:01.569 23:04:24 -- common/autotest_common.sh@128 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:08:01.569 23:04:24 -- common/autotest_common.sh@130 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:01.569 23:04:24 -- common/autotest_common.sh@132 -- # : v23.11 00:08:01.569 23:04:24 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:01.569 23:04:24 -- common/autotest_common.sh@134 -- # : true 00:08:01.569 23:04:24 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:01.569 23:04:24 -- common/autotest_common.sh@136 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:01.569 23:04:24 -- common/autotest_common.sh@138 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:01.569 23:04:24 -- common/autotest_common.sh@140 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:01.569 23:04:24 -- common/autotest_common.sh@142 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:01.569 23:04:24 -- common/autotest_common.sh@144 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:01.569 23:04:24 -- common/autotest_common.sh@146 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:01.569 23:04:24 -- common/autotest_common.sh@148 -- # : e810 00:08:01.569 23:04:24 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:01.569 23:04:24 -- common/autotest_common.sh@150 -- # : 0 00:08:01.569 23:04:24 -- 
common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:01.569 23:04:24 -- common/autotest_common.sh@152 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:08:01.569 23:04:24 -- common/autotest_common.sh@154 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:01.569 23:04:24 -- common/autotest_common.sh@156 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:01.569 23:04:24 -- common/autotest_common.sh@158 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:01.569 23:04:24 -- common/autotest_common.sh@160 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:08:01.569 23:04:24 -- common/autotest_common.sh@163 -- # : 00:08:01.569 23:04:24 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:01.569 23:04:24 -- common/autotest_common.sh@165 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:01.569 23:04:24 -- common/autotest_common.sh@167 -- # : 0 00:08:01.569 23:04:24 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:01.569 23:04:24 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:01.569 23:04:24 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:01.569 23:04:24 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:01.569 23:04:24 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:01.569 23:04:24 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:01.569 23:04:24 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:01.570 23:04:24 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:01.570 23:04:24 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:01.570 23:04:24 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:01.570 23:04:24 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:01.570 23:04:24 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:01.570 23:04:24 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:01.570 23:04:24 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:01.570 23:04:24 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:01.570 23:04:24 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:01.570 23:04:24 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:01.570 23:04:24 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:01.570 23:04:24 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:01.570 23:04:24 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:01.570 23:04:24 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:01.570 23:04:24 -- common/autotest_common.sh@196 -- # cat 00:08:01.570 23:04:24 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:01.570 23:04:24 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:01.570 23:04:24 -- 
common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:01.570 23:04:24 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:01.570 23:04:24 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:01.570 23:04:24 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:01.570 23:04:24 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:01.570 23:04:24 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:01.570 23:04:24 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:01.570 23:04:24 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:01.570 23:04:24 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:01.570 23:04:24 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:01.570 23:04:24 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:01.570 23:04:24 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:01.570 23:04:24 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:01.570 23:04:24 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:01.570 23:04:24 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:01.570 23:04:24 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:01.570 23:04:24 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:01.570 23:04:24 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:08:01.570 23:04:24 -- common/autotest_common.sh@249 -- # export valgrind= 00:08:01.570 23:04:24 -- common/autotest_common.sh@249 -- # valgrind= 00:08:01.570 23:04:24 -- common/autotest_common.sh@255 -- # uname -s 00:08:01.570 23:04:24 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:08:01.570 23:04:24 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:08:01.570 23:04:24 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:08:01.570 23:04:24 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:08:01.570 23:04:24 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:01.570 23:04:24 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:01.570 23:04:24 -- common/autotest_common.sh@265 -- # MAKE=make 00:08:01.570 23:04:24 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j144 00:08:01.570 23:04:24 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:08:01.570 23:04:24 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:08:01.570 23:04:24 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:01.570 23:04:24 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:08:01.570 23:04:24 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:08:01.570 23:04:24 -- common/autotest_common.sh@291 -- # for i in "$@" 00:08:01.570 23:04:24 -- common/autotest_common.sh@292 -- # case "$i" in 00:08:01.570 23:04:24 -- common/autotest_common.sh@297 
-- # TEST_TRANSPORT=tcp 00:08:01.570 23:04:24 -- common/autotest_common.sh@309 -- # [[ -z 2645865 ]] 00:08:01.570 23:04:24 -- common/autotest_common.sh@309 -- # kill -0 2645865 00:08:01.570 23:04:24 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:08:01.570 23:04:24 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:08:01.570 23:04:24 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:08:01.570 23:04:24 -- common/autotest_common.sh@322 -- # local mount target_dir 00:08:01.570 23:04:24 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:08:01.570 23:04:24 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:08:01.570 23:04:24 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:08:01.832 23:04:24 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:08:01.832 23:04:24 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.vBSQxb 00:08:01.832 23:04:24 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:01.832 23:04:24 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:08:01.832 23:04:24 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:08:01.832 23:04:24 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.vBSQxb/tests/target /tmp/spdk.vBSQxb 00:08:01.832 23:04:24 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:08:01.832 23:04:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:01.832 23:04:24 -- common/autotest_common.sh@318 -- # df -T 00:08:01.832 23:04:24 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:08:01.832 23:04:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:08:01.832 23:04:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:08:01.832 23:04:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:08:01.832 23:04:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:08:01.832 23:04:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:08:01.832 23:04:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:01.832 23:04:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:08:01.832 23:04:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:08:01.832 23:04:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=957403136 00:08:01.832 23:04:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:08:01.832 23:04:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=4327026688 00:08:01.832 23:04:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:01.833 23:04:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:08:01.833 23:04:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:08:01.833 23:04:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=121383198720 00:08:01.833 23:04:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=129370996736 00:08:01.833 23:04:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=7987798016 00:08:01.833 23:04:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:01.833 23:04:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:01.833 23:04:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 
00:08:01.833 23:04:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=64682905600 00:08:01.833 23:04:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=64685498368 00:08:01.833 23:04:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768 00:08:01.833 23:04:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:01.833 23:04:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:01.833 23:04:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:01.833 23:04:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=25864454144 00:08:01.833 23:04:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=25874202624 00:08:01.833 23:04:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=9748480 00:08:01.833 23:04:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:01.833 23:04:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=efivarfs 00:08:01.833 23:04:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=efivarfs 00:08:01.833 23:04:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=179200 00:08:01.833 23:04:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=507904 00:08:01.833 23:04:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=324608 00:08:01.833 23:04:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:01.833 23:04:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:01.833 23:04:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:01.833 23:04:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=64684400640 00:08:01.833 23:04:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=64685498368 00:08:01.833 23:04:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=1097728 00:08:01.833 23:04:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:01.833 23:04:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:01.833 23:04:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:01.833 23:04:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=12937093120 00:08:01.833 23:04:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=12937097216 00:08:01.833 23:04:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:08:01.833 23:04:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:01.833 23:04:24 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:08:01.833 * Looking for test storage... 
00:08:01.833 23:04:24 -- common/autotest_common.sh@359 -- # local target_space new_size 00:08:01.833 23:04:24 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:08:01.833 23:04:24 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:01.833 23:04:24 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:01.833 23:04:24 -- common/autotest_common.sh@363 -- # mount=/ 00:08:01.833 23:04:24 -- common/autotest_common.sh@365 -- # target_space=121383198720 00:08:01.833 23:04:24 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:08:01.833 23:04:24 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:08:01.833 23:04:24 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:08:01.833 23:04:24 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:08:01.833 23:04:24 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:08:01.833 23:04:24 -- common/autotest_common.sh@372 -- # new_size=10202390528 00:08:01.833 23:04:24 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:01.833 23:04:24 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:01.833 23:04:24 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:01.833 23:04:24 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:01.833 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:01.833 23:04:24 -- common/autotest_common.sh@380 -- # return 0 00:08:01.833 23:04:24 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:08:01.833 23:04:24 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:08:01.833 23:04:24 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:01.833 23:04:24 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:01.833 23:04:24 -- common/autotest_common.sh@1672 -- # true 00:08:01.833 23:04:24 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:08:01.833 23:04:24 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:01.833 23:04:24 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:01.833 23:04:24 -- common/autotest_common.sh@27 -- # exec 00:08:01.833 23:04:24 -- common/autotest_common.sh@29 -- # exec 00:08:01.833 23:04:24 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:01.833 23:04:24 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:01.833 23:04:24 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:01.833 23:04:24 -- common/autotest_common.sh@18 -- # set -x 00:08:01.833 23:04:24 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:01.833 23:04:24 -- nvmf/common.sh@7 -- # uname -s 00:08:01.833 23:04:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:01.833 23:04:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.833 23:04:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.833 23:04:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.833 23:04:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.833 23:04:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.833 23:04:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.833 23:04:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.833 23:04:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.833 23:04:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.833 23:04:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:01.833 23:04:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:01.833 23:04:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.833 23:04:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.833 23:04:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:01.833 23:04:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:01.833 23:04:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.833 23:04:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.833 23:04:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.833 23:04:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.833 23:04:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.833 23:04:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.833 23:04:24 -- paths/export.sh@5 -- # export PATH 00:08:01.833 23:04:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.833 23:04:24 -- nvmf/common.sh@46 -- # : 0 00:08:01.833 23:04:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:01.833 23:04:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:01.833 23:04:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:01.833 23:04:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.833 23:04:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.833 23:04:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:01.833 23:04:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:01.833 23:04:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:01.833 23:04:24 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:01.833 23:04:24 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:01.833 23:04:24 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:01.833 23:04:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:01.833 23:04:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:01.834 23:04:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:01.834 23:04:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:01.834 23:04:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:01.834 23:04:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.834 23:04:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:01.834 23:04:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.834 23:04:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:01.834 23:04:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:01.834 23:04:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:01.834 23:04:24 -- common/autotest_common.sh@10 -- # set +x 00:08:09.977 23:04:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:09.977 23:04:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:09.977 23:04:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:09.977 23:04:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:09.977 23:04:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:09.977 23:04:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:09.977 23:04:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:09.977 23:04:31 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:09.977 23:04:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:09.977 23:04:31 -- nvmf/common.sh@295 -- # e810=() 00:08:09.977 23:04:31 -- nvmf/common.sh@295 -- # local -ga e810 00:08:09.977 23:04:31 -- nvmf/common.sh@296 -- # x722=() 00:08:09.977 23:04:31 -- nvmf/common.sh@296 -- # local -ga x722 00:08:09.977 23:04:31 -- nvmf/common.sh@297 -- # mlx=() 00:08:09.977 23:04:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:09.977 23:04:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:09.977 23:04:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:09.977 23:04:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:09.977 23:04:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:09.977 23:04:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:09.977 23:04:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:09.977 23:04:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:09.977 23:04:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:09.977 23:04:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:09.977 23:04:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:09.977 23:04:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:09.977 23:04:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:09.977 23:04:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:09.977 23:04:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:09.977 23:04:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:09.977 23:04:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:09.977 23:04:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:09.977 23:04:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:09.977 23:04:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:09.977 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:09.977 23:04:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:09.977 23:04:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:09.977 23:04:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.977 23:04:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.977 23:04:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:09.977 23:04:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:09.977 23:04:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:09.977 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:09.977 23:04:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:09.977 23:04:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:09.977 23:04:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.977 23:04:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.977 23:04:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:09.977 23:04:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:09.977 23:04:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:09.977 23:04:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:09.977 23:04:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:09.977 23:04:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.977 23:04:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:09.977 23:04:31 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.977 23:04:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:09.977 Found net devices under 0000:31:00.0: cvl_0_0 00:08:09.977 23:04:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.977 23:04:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:09.977 23:04:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.977 23:04:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:09.977 23:04:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.977 23:04:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:09.977 Found net devices under 0000:31:00.1: cvl_0_1 00:08:09.977 23:04:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.977 23:04:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:09.977 23:04:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:09.977 23:04:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:09.977 23:04:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:09.977 23:04:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:09.977 23:04:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:09.977 23:04:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:09.977 23:04:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:09.977 23:04:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:09.977 23:04:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:09.977 23:04:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:09.977 23:04:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:09.977 23:04:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:09.977 23:04:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:09.977 23:04:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:09.977 23:04:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:09.977 23:04:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:09.977 23:04:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:09.977 23:04:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:09.977 23:04:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:09.978 23:04:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:09.978 23:04:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:09.978 23:04:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:09.978 23:04:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:09.978 23:04:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:09.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:09.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.498 ms 00:08:09.978 00:08:09.978 --- 10.0.0.2 ping statistics --- 00:08:09.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.978 rtt min/avg/max/mdev = 0.498/0.498/0.498/0.000 ms 00:08:09.978 23:04:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:09.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:09.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:08:09.978 00:08:09.978 --- 10.0.0.1 ping statistics --- 00:08:09.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.978 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:08:09.978 23:04:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:09.978 23:04:31 -- nvmf/common.sh@410 -- # return 0 00:08:09.978 23:04:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:09.978 23:04:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:09.978 23:04:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:09.978 23:04:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:09.978 23:04:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:09.978 23:04:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:09.978 23:04:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:09.978 23:04:31 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:09.978 23:04:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:09.978 23:04:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:09.978 23:04:31 -- common/autotest_common.sh@10 -- # set +x 00:08:09.978 ************************************ 00:08:09.978 START TEST nvmf_filesystem_no_in_capsule 00:08:09.978 ************************************ 00:08:09.978 23:04:31 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:08:09.978 23:04:31 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:09.978 23:04:31 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:09.978 23:04:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:09.978 23:04:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:09.978 23:04:31 -- common/autotest_common.sh@10 -- # set +x 00:08:09.978 23:04:31 -- nvmf/common.sh@469 -- # nvmfpid=2649718 00:08:09.978 23:04:31 -- nvmf/common.sh@470 -- # waitforlisten 2649718 00:08:09.978 23:04:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:09.978 23:04:31 -- common/autotest_common.sh@819 -- # '[' -z 2649718 ']' 00:08:09.978 23:04:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.978 23:04:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:09.978 23:04:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.978 23:04:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:09.978 23:04:31 -- common/autotest_common.sh@10 -- # set +x 00:08:09.978 [2024-06-07 23:04:31.693925] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:08:09.978 [2024-06-07 23:04:31.693974] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.978 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.978 [2024-06-07 23:04:31.752639] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:09.978 [2024-06-07 23:04:31.785140] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:09.978 [2024-06-07 23:04:31.785266] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.978 [2024-06-07 23:04:31.785276] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:09.978 [2024-06-07 23:04:31.785282] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:09.978 [2024-06-07 23:04:31.785348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.978 [2024-06-07 23:04:31.785466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:09.978 [2024-06-07 23:04:31.785621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.978 [2024-06-07 23:04:31.785623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:09.978 23:04:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:09.978 23:04:32 -- common/autotest_common.sh@852 -- # return 0 00:08:09.978 23:04:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:09.978 23:04:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:09.978 23:04:32 -- common/autotest_common.sh@10 -- # set +x 00:08:09.978 23:04:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:09.978 23:04:32 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:09.978 23:04:32 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:09.978 23:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.978 23:04:32 -- common/autotest_common.sh@10 -- # set +x 00:08:09.978 [2024-06-07 23:04:32.492542] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:09.978 23:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.978 23:04:32 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:09.978 23:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.978 23:04:32 -- common/autotest_common.sh@10 -- # set +x 00:08:09.978 Malloc1 00:08:09.978 23:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.978 23:04:32 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:09.978 23:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.978 23:04:32 -- common/autotest_common.sh@10 -- # set +x 00:08:09.978 23:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.978 23:04:32 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:09.978 23:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.978 23:04:32 -- common/autotest_common.sh@10 -- # set +x 00:08:09.978 23:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.978 23:04:32 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:08:09.978 23:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.978 23:04:32 -- common/autotest_common.sh@10 -- # set +x 00:08:09.978 [2024-06-07 23:04:32.619442] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.978 23:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.978 23:04:32 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:09.978 23:04:32 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:09.978 23:04:32 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:09.978 23:04:32 -- common/autotest_common.sh@1359 -- # local bs 00:08:09.978 23:04:32 -- common/autotest_common.sh@1360 -- # local nb 00:08:09.978 23:04:32 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:09.978 23:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.978 23:04:32 -- common/autotest_common.sh@10 -- # set +x 00:08:09.978 23:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.978 23:04:32 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:09.978 { 00:08:09.978 "name": "Malloc1", 00:08:09.978 "aliases": [ 00:08:09.978 "2ba53c29-bbe8-4b3b-8ce5-287abc0728a4" 00:08:09.978 ], 00:08:09.978 "product_name": "Malloc disk", 00:08:09.978 "block_size": 512, 00:08:09.978 "num_blocks": 1048576, 00:08:09.978 "uuid": "2ba53c29-bbe8-4b3b-8ce5-287abc0728a4", 00:08:09.978 "assigned_rate_limits": { 00:08:09.978 "rw_ios_per_sec": 0, 00:08:09.978 "rw_mbytes_per_sec": 0, 00:08:09.978 "r_mbytes_per_sec": 0, 00:08:09.978 "w_mbytes_per_sec": 0 00:08:09.978 }, 00:08:09.978 "claimed": true, 00:08:09.978 "claim_type": "exclusive_write", 00:08:09.978 "zoned": false, 00:08:09.978 "supported_io_types": { 00:08:09.978 "read": true, 00:08:09.978 "write": true, 00:08:09.978 "unmap": true, 00:08:09.978 "write_zeroes": true, 00:08:09.978 "flush": true, 00:08:09.978 "reset": true, 00:08:09.978 "compare": false, 00:08:09.978 "compare_and_write": false, 00:08:09.978 "abort": true, 00:08:09.978 "nvme_admin": false, 00:08:09.978 "nvme_io": false 00:08:09.978 }, 00:08:09.978 "memory_domains": [ 00:08:09.978 { 00:08:09.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.978 "dma_device_type": 2 00:08:09.978 } 00:08:09.978 ], 00:08:09.978 "driver_specific": {} 00:08:09.978 } 00:08:09.978 ]' 00:08:09.978 23:04:32 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:10.240 23:04:32 -- common/autotest_common.sh@1362 -- # bs=512 00:08:10.240 23:04:32 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:10.240 23:04:32 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:10.240 23:04:32 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:10.240 23:04:32 -- common/autotest_common.sh@1367 -- # echo 512 00:08:10.240 23:04:32 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:10.240 23:04:32 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:11.633 23:04:34 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:11.633 23:04:34 -- common/autotest_common.sh@1177 -- # local i=0 00:08:11.633 23:04:34 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:11.633 23:04:34 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:11.633 23:04:34 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:14.258 23:04:36 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:14.258 23:04:36 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:14.258 23:04:36 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:14.258 23:04:36 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:14.258 23:04:36 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:14.258 23:04:36 -- common/autotest_common.sh@1187 -- # return 0 00:08:14.258 23:04:36 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:14.258 23:04:36 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:14.258 23:04:36 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:14.258 23:04:36 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:14.258 23:04:36 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:14.258 23:04:36 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:14.258 23:04:36 -- setup/common.sh@80 -- # echo 536870912 00:08:14.258 23:04:36 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:14.258 23:04:36 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:14.258 23:04:36 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:14.258 23:04:36 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:14.258 23:04:36 -- target/filesystem.sh@69 -- # partprobe 00:08:14.258 23:04:36 -- target/filesystem.sh@70 -- # sleep 1 00:08:15.198 23:04:37 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:15.198 23:04:37 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:15.198 23:04:37 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:15.198 23:04:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:15.198 23:04:37 -- common/autotest_common.sh@10 -- # set +x 00:08:15.198 ************************************ 00:08:15.198 START TEST filesystem_ext4 00:08:15.198 ************************************ 00:08:15.198 23:04:37 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:15.198 23:04:37 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:15.198 23:04:37 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:15.198 23:04:37 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:15.198 23:04:37 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:15.198 23:04:37 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:15.198 23:04:37 -- common/autotest_common.sh@904 -- # local i=0 00:08:15.198 23:04:37 -- common/autotest_common.sh@905 -- # local force 00:08:15.198 23:04:37 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:15.198 23:04:37 -- common/autotest_common.sh@908 -- # force=-F 00:08:15.198 23:04:37 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:15.198 mke2fs 1.46.5 (30-Dec-2021) 00:08:15.458 Discarding device blocks: 0/522240 done 00:08:15.458 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:15.458 Filesystem UUID: dd2613c8-a4fa-4ea9-9f4e-7b3e6815f41d 00:08:15.458 Superblock backups stored on blocks: 00:08:15.458 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:15.458 00:08:15.458 Allocating group tables: 0/64 done 00:08:15.458 Writing inode tables: 0/64 done 00:08:18.755 Creating journal (8192 blocks): done 00:08:19.325 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:08:19.325 00:08:19.325 23:04:41 -- 
common/autotest_common.sh@921 -- # return 0 00:08:19.325 23:04:41 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:19.894 23:04:42 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:19.894 23:04:42 -- target/filesystem.sh@25 -- # sync 00:08:19.894 23:04:42 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:19.894 23:04:42 -- target/filesystem.sh@27 -- # sync 00:08:19.894 23:04:42 -- target/filesystem.sh@29 -- # i=0 00:08:19.894 23:04:42 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:19.894 23:04:42 -- target/filesystem.sh@37 -- # kill -0 2649718 00:08:19.894 23:04:42 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:19.894 23:04:42 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:20.153 23:04:42 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:20.153 23:04:42 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:20.153 00:08:20.153 real 0m4.758s 00:08:20.153 user 0m0.026s 00:08:20.153 sys 0m0.053s 00:08:20.153 23:04:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.153 23:04:42 -- common/autotest_common.sh@10 -- # set +x 00:08:20.153 ************************************ 00:08:20.153 END TEST filesystem_ext4 00:08:20.153 ************************************ 00:08:20.153 23:04:42 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:20.153 23:04:42 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:20.153 23:04:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:20.154 23:04:42 -- common/autotest_common.sh@10 -- # set +x 00:08:20.154 ************************************ 00:08:20.154 START TEST filesystem_btrfs 00:08:20.154 ************************************ 00:08:20.154 23:04:42 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:20.154 23:04:42 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:20.154 23:04:42 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:20.154 23:04:42 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:20.154 23:04:42 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:20.154 23:04:42 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:20.154 23:04:42 -- common/autotest_common.sh@904 -- # local i=0 00:08:20.154 23:04:42 -- common/autotest_common.sh@905 -- # local force 00:08:20.154 23:04:42 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:20.154 23:04:42 -- common/autotest_common.sh@910 -- # force=-f 00:08:20.154 23:04:42 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:20.413 btrfs-progs v6.6.2 00:08:20.413 See https://btrfs.readthedocs.io for more information. 00:08:20.413 00:08:20.413 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:20.413 NOTE: several default settings have changed in version 5.15, please make sure 00:08:20.413 this does not affect your deployments: 00:08:20.413 - DUP for metadata (-m dup) 00:08:20.413 - enabled no-holes (-O no-holes) 00:08:20.413 - enabled free-space-tree (-R free-space-tree) 00:08:20.413 00:08:20.413 Label: (null) 00:08:20.413 UUID: 86b3de82-34b0-4d8c-ab11-10ffa056f82c 00:08:20.413 Node size: 16384 00:08:20.413 Sector size: 4096 00:08:20.413 Filesystem size: 510.00MiB 00:08:20.413 Block group profiles: 00:08:20.413 Data: single 8.00MiB 00:08:20.413 Metadata: DUP 32.00MiB 00:08:20.413 System: DUP 8.00MiB 00:08:20.413 SSD detected: yes 00:08:20.413 Zoned device: no 00:08:20.413 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:20.413 Runtime features: free-space-tree 00:08:20.413 Checksum: crc32c 00:08:20.413 Number of devices: 1 00:08:20.413 Devices: 00:08:20.413 ID SIZE PATH 00:08:20.413 1 510.00MiB /dev/nvme0n1p1 00:08:20.413 00:08:20.413 23:04:43 -- common/autotest_common.sh@921 -- # return 0 00:08:20.413 23:04:43 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:20.673 23:04:43 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:20.673 23:04:43 -- target/filesystem.sh@25 -- # sync 00:08:20.673 23:04:43 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:20.673 23:04:43 -- target/filesystem.sh@27 -- # sync 00:08:20.673 23:04:43 -- target/filesystem.sh@29 -- # i=0 00:08:20.673 23:04:43 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:20.673 23:04:43 -- target/filesystem.sh@37 -- # kill -0 2649718 00:08:20.673 23:04:43 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:20.673 23:04:43 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:20.673 23:04:43 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:20.673 23:04:43 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:20.673 00:08:20.673 real 0m0.644s 00:08:20.673 user 0m0.030s 00:08:20.673 sys 0m0.060s 00:08:20.673 23:04:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.673 23:04:43 -- common/autotest_common.sh@10 -- # set +x 00:08:20.673 ************************************ 00:08:20.673 END TEST filesystem_btrfs 00:08:20.673 ************************************ 00:08:20.673 23:04:43 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:20.673 23:04:43 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:20.673 23:04:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:20.673 23:04:43 -- common/autotest_common.sh@10 -- # set +x 00:08:20.673 ************************************ 00:08:20.673 START TEST filesystem_xfs 00:08:20.673 ************************************ 00:08:20.673 23:04:43 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:20.673 23:04:43 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:20.673 23:04:43 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:20.673 23:04:43 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:20.673 23:04:43 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:20.673 23:04:43 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:20.673 23:04:43 -- common/autotest_common.sh@904 -- # local i=0 00:08:20.673 23:04:43 -- common/autotest_common.sh@905 -- # local force 00:08:20.673 23:04:43 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:20.673 23:04:43 -- common/autotest_common.sh@910 -- # force=-f 00:08:20.673 23:04:43 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:20.932 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:20.932 = sectsz=512 attr=2, projid32bit=1 00:08:20.932 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:20.932 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:20.932 data = bsize=4096 blocks=130560, imaxpct=25 00:08:20.932 = sunit=0 swidth=0 blks 00:08:20.932 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:20.932 log =internal log bsize=4096 blocks=16384, version=2 00:08:20.932 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:20.932 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:21.501 Discarding blocks...Done. 00:08:21.501 23:04:44 -- common/autotest_common.sh@921 -- # return 0 00:08:21.501 23:04:44 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:23.410 23:04:45 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:23.410 23:04:45 -- target/filesystem.sh@25 -- # sync 00:08:23.410 23:04:45 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:23.410 23:04:45 -- target/filesystem.sh@27 -- # sync 00:08:23.410 23:04:45 -- target/filesystem.sh@29 -- # i=0 00:08:23.410 23:04:46 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:23.410 23:04:46 -- target/filesystem.sh@37 -- # kill -0 2649718 00:08:23.410 23:04:46 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:23.410 23:04:46 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:23.410 23:04:46 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:23.410 23:04:46 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:23.410 00:08:23.410 real 0m2.716s 00:08:23.410 user 0m0.021s 00:08:23.410 sys 0m0.057s 00:08:23.410 23:04:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.410 23:04:46 -- common/autotest_common.sh@10 -- # set +x 00:08:23.410 ************************************ 00:08:23.410 END TEST filesystem_xfs 00:08:23.410 ************************************ 00:08:23.411 23:04:46 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:23.671 23:04:46 -- target/filesystem.sh@93 -- # sync 00:08:23.671 23:04:46 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:23.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:23.671 23:04:46 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:23.671 23:04:46 -- common/autotest_common.sh@1198 -- # local i=0 00:08:23.671 23:04:46 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:23.671 23:04:46 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:23.671 23:04:46 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:23.671 23:04:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:23.671 23:04:46 -- common/autotest_common.sh@1210 -- # return 0 00:08:23.671 23:04:46 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:23.671 23:04:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:23.671 23:04:46 -- common/autotest_common.sh@10 -- # set +x 00:08:23.671 23:04:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:23.671 23:04:46 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:23.671 23:04:46 -- target/filesystem.sh@101 -- # killprocess 2649718 00:08:23.671 23:04:46 -- common/autotest_common.sh@926 -- # '[' -z 2649718 ']' 00:08:23.671 23:04:46 -- common/autotest_common.sh@930 -- # kill -0 2649718 00:08:23.671 23:04:46 -- 
common/autotest_common.sh@931 -- # uname 00:08:23.671 23:04:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:23.671 23:04:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2649718 00:08:23.671 23:04:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:23.671 23:04:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:23.671 23:04:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2649718' 00:08:23.671 killing process with pid 2649718 00:08:23.671 23:04:46 -- common/autotest_common.sh@945 -- # kill 2649718 00:08:23.671 23:04:46 -- common/autotest_common.sh@950 -- # wait 2649718 00:08:23.931 23:04:46 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:23.931 00:08:23.931 real 0m14.847s 00:08:23.931 user 0m58.768s 00:08:23.931 sys 0m1.005s 00:08:23.931 23:04:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.931 23:04:46 -- common/autotest_common.sh@10 -- # set +x 00:08:23.931 ************************************ 00:08:23.931 END TEST nvmf_filesystem_no_in_capsule 00:08:23.931 ************************************ 00:08:23.931 23:04:46 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:23.931 23:04:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:23.931 23:04:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:23.931 23:04:46 -- common/autotest_common.sh@10 -- # set +x 00:08:23.931 ************************************ 00:08:23.931 START TEST nvmf_filesystem_in_capsule 00:08:23.931 ************************************ 00:08:23.931 23:04:46 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:08:23.931 23:04:46 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:23.931 23:04:46 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:23.931 23:04:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:23.931 23:04:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:23.931 23:04:46 -- common/autotest_common.sh@10 -- # set +x 00:08:23.931 23:04:46 -- nvmf/common.sh@469 -- # nvmfpid=2652854 00:08:23.931 23:04:46 -- nvmf/common.sh@470 -- # waitforlisten 2652854 00:08:23.931 23:04:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:23.931 23:04:46 -- common/autotest_common.sh@819 -- # '[' -z 2652854 ']' 00:08:23.931 23:04:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.931 23:04:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:23.931 23:04:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.931 23:04:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:23.931 23:04:46 -- common/autotest_common.sh@10 -- # set +x 00:08:23.931 [2024-06-07 23:04:46.590224] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:08:23.931 [2024-06-07 23:04:46.590293] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:24.192 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.192 [2024-06-07 23:04:46.655176] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:24.192 [2024-06-07 23:04:46.685754] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:24.192 [2024-06-07 23:04:46.685888] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:24.192 [2024-06-07 23:04:46.685898] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:24.192 [2024-06-07 23:04:46.685906] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:24.192 [2024-06-07 23:04:46.686043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.192 [2024-06-07 23:04:46.686165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:24.192 [2024-06-07 23:04:46.686330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.192 [2024-06-07 23:04:46.686461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.763 23:04:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:24.763 23:04:47 -- common/autotest_common.sh@852 -- # return 0 00:08:24.763 23:04:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:24.763 23:04:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:24.763 23:04:47 -- common/autotest_common.sh@10 -- # set +x 00:08:24.763 23:04:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.763 23:04:47 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:24.763 23:04:47 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:24.763 23:04:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:24.763 23:04:47 -- common/autotest_common.sh@10 -- # set +x 00:08:24.763 [2024-06-07 23:04:47.404520] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.763 23:04:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:24.763 23:04:47 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:24.763 23:04:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:24.763 23:04:47 -- common/autotest_common.sh@10 -- # set +x 00:08:25.023 Malloc1 00:08:25.023 23:04:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:25.023 23:04:47 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:25.023 23:04:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:25.023 23:04:47 -- common/autotest_common.sh@10 -- # set +x 00:08:25.023 23:04:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:25.023 23:04:47 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:25.023 23:04:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:25.023 23:04:47 -- common/autotest_common.sh@10 -- # set +x 00:08:25.023 23:04:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:25.023 23:04:47 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:08:25.023 23:04:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:25.023 23:04:47 -- common/autotest_common.sh@10 -- # set +x 00:08:25.023 [2024-06-07 23:04:47.529184] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:25.023 23:04:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:25.023 23:04:47 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:25.023 23:04:47 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:25.023 23:04:47 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:25.023 23:04:47 -- common/autotest_common.sh@1359 -- # local bs 00:08:25.023 23:04:47 -- common/autotest_common.sh@1360 -- # local nb 00:08:25.023 23:04:47 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:25.023 23:04:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:25.023 23:04:47 -- common/autotest_common.sh@10 -- # set +x 00:08:25.023 23:04:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:25.023 23:04:47 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:25.023 { 00:08:25.023 "name": "Malloc1", 00:08:25.023 "aliases": [ 00:08:25.023 "25ae8fd5-ad05-46cb-a24d-6f23b741bf8d" 00:08:25.023 ], 00:08:25.023 "product_name": "Malloc disk", 00:08:25.023 "block_size": 512, 00:08:25.023 "num_blocks": 1048576, 00:08:25.023 "uuid": "25ae8fd5-ad05-46cb-a24d-6f23b741bf8d", 00:08:25.023 "assigned_rate_limits": { 00:08:25.023 "rw_ios_per_sec": 0, 00:08:25.023 "rw_mbytes_per_sec": 0, 00:08:25.023 "r_mbytes_per_sec": 0, 00:08:25.023 "w_mbytes_per_sec": 0 00:08:25.023 }, 00:08:25.023 "claimed": true, 00:08:25.023 "claim_type": "exclusive_write", 00:08:25.023 "zoned": false, 00:08:25.023 "supported_io_types": { 00:08:25.023 "read": true, 00:08:25.023 "write": true, 00:08:25.023 "unmap": true, 00:08:25.023 "write_zeroes": true, 00:08:25.023 "flush": true, 00:08:25.023 "reset": true, 00:08:25.023 "compare": false, 00:08:25.023 "compare_and_write": false, 00:08:25.023 "abort": true, 00:08:25.023 "nvme_admin": false, 00:08:25.023 "nvme_io": false 00:08:25.023 }, 00:08:25.023 "memory_domains": [ 00:08:25.023 { 00:08:25.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.023 "dma_device_type": 2 00:08:25.023 } 00:08:25.023 ], 00:08:25.023 "driver_specific": {} 00:08:25.023 } 00:08:25.023 ]' 00:08:25.023 23:04:47 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:25.023 23:04:47 -- common/autotest_common.sh@1362 -- # bs=512 00:08:25.023 23:04:47 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:25.023 23:04:47 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:25.023 23:04:47 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:25.023 23:04:47 -- common/autotest_common.sh@1367 -- # echo 512 00:08:25.023 23:04:47 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:25.023 23:04:47 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:26.933 23:04:49 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:26.933 23:04:49 -- common/autotest_common.sh@1177 -- # local i=0 00:08:26.933 23:04:49 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:26.933 23:04:49 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:26.933 23:04:49 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:28.845 23:04:51 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:28.845 23:04:51 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:28.845 23:04:51 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:28.845 23:04:51 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:28.845 23:04:51 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:28.845 23:04:51 -- common/autotest_common.sh@1187 -- # return 0 00:08:28.845 23:04:51 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:28.845 23:04:51 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:28.845 23:04:51 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:28.845 23:04:51 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:28.845 23:04:51 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:28.845 23:04:51 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:28.845 23:04:51 -- setup/common.sh@80 -- # echo 536870912 00:08:28.845 23:04:51 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:28.845 23:04:51 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:28.845 23:04:51 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:28.845 23:04:51 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:29.105 23:04:51 -- target/filesystem.sh@69 -- # partprobe 00:08:29.105 23:04:51 -- target/filesystem.sh@70 -- # sleep 1 00:08:30.490 23:04:52 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:30.490 23:04:52 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:30.490 23:04:52 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:30.490 23:04:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:30.490 23:04:52 -- common/autotest_common.sh@10 -- # set +x 00:08:30.490 ************************************ 00:08:30.490 START TEST filesystem_in_capsule_ext4 00:08:30.490 ************************************ 00:08:30.490 23:04:52 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:30.490 23:04:52 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:30.490 23:04:52 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:30.490 23:04:52 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:30.490 23:04:52 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:30.490 23:04:52 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:30.490 23:04:52 -- common/autotest_common.sh@904 -- # local i=0 00:08:30.490 23:04:52 -- common/autotest_common.sh@905 -- # local force 00:08:30.490 23:04:52 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:30.490 23:04:52 -- common/autotest_common.sh@908 -- # force=-F 00:08:30.490 23:04:52 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:30.490 mke2fs 1.46.5 (30-Dec-2021) 00:08:30.490 Discarding device blocks: 0/522240 done 00:08:30.490 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:30.490 Filesystem UUID: 4dc4997e-788a-46dc-8e6c-f052a5b76560 00:08:30.490 Superblock backups stored on blocks: 00:08:30.490 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:30.490 00:08:30.490 Allocating group tables: 0/64 done 00:08:30.490 Writing inode tables: 0/64 done 00:08:30.490 Creating journal (8192 blocks): done 00:08:31.427 Writing superblocks and filesystem accounting information: 0/64 done 00:08:31.427 00:08:31.427 
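Note: rpc_cmd in the traces above forwards each command to the target's JSON-RPC socket; issued directly with scripts/rpc.py, the in-capsule target configuration traced at the start of this run would look roughly like the sketch below. The arguments are copied from the trace; the $rpc and $sock names are just shorthand for this note, not part of the test.

  # illustrative sketch, not part of the test trace
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk.sock
  $rpc -s $sock nvmf_create_transport -t tcp -o -u 8192 -c 4096    # -c 4096: in-capsule data size; 0 in the earlier run
  $rpc -s $sock bdev_malloc_create 512 512 -b Malloc1              # 512 MiB malloc bdev with 512-byte blocks
  $rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420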
23:04:53 -- common/autotest_common.sh@921 -- # return 0 00:08:31.427 23:04:53 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:31.687 23:04:54 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:31.687 23:04:54 -- target/filesystem.sh@25 -- # sync 00:08:31.687 23:04:54 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:31.687 23:04:54 -- target/filesystem.sh@27 -- # sync 00:08:31.687 23:04:54 -- target/filesystem.sh@29 -- # i=0 00:08:31.687 23:04:54 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:31.687 23:04:54 -- target/filesystem.sh@37 -- # kill -0 2652854 00:08:31.687 23:04:54 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:31.687 23:04:54 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:31.687 23:04:54 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:31.687 23:04:54 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:31.687 00:08:31.687 real 0m1.536s 00:08:31.687 user 0m0.027s 00:08:31.687 sys 0m0.047s 00:08:31.687 23:04:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.687 23:04:54 -- common/autotest_common.sh@10 -- # set +x 00:08:31.687 ************************************ 00:08:31.687 END TEST filesystem_in_capsule_ext4 00:08:31.687 ************************************ 00:08:31.687 23:04:54 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:31.687 23:04:54 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:31.687 23:04:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:31.687 23:04:54 -- common/autotest_common.sh@10 -- # set +x 00:08:31.687 ************************************ 00:08:31.687 START TEST filesystem_in_capsule_btrfs 00:08:31.687 ************************************ 00:08:31.687 23:04:54 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:31.687 23:04:54 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:31.687 23:04:54 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:31.687 23:04:54 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:31.687 23:04:54 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:31.687 23:04:54 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:31.687 23:04:54 -- common/autotest_common.sh@904 -- # local i=0 00:08:31.687 23:04:54 -- common/autotest_common.sh@905 -- # local force 00:08:31.687 23:04:54 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:31.687 23:04:54 -- common/autotest_common.sh@910 -- # force=-f 00:08:31.687 23:04:54 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:31.947 btrfs-progs v6.6.2 00:08:31.947 See https://btrfs.readthedocs.io for more information. 00:08:31.947 00:08:31.947 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:31.947 NOTE: several default settings have changed in version 5.15, please make sure 00:08:31.947 this does not affect your deployments: 00:08:31.947 - DUP for metadata (-m dup) 00:08:31.947 - enabled no-holes (-O no-holes) 00:08:31.947 - enabled free-space-tree (-R free-space-tree) 00:08:31.947 00:08:31.947 Label: (null) 00:08:31.947 UUID: 4dddb218-6322-4b88-9452-517a98948ca0 00:08:31.947 Node size: 16384 00:08:31.947 Sector size: 4096 00:08:31.947 Filesystem size: 510.00MiB 00:08:31.947 Block group profiles: 00:08:31.947 Data: single 8.00MiB 00:08:31.947 Metadata: DUP 32.00MiB 00:08:31.947 System: DUP 8.00MiB 00:08:31.947 SSD detected: yes 00:08:31.947 Zoned device: no 00:08:31.947 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:31.947 Runtime features: free-space-tree 00:08:31.947 Checksum: crc32c 00:08:31.947 Number of devices: 1 00:08:31.947 Devices: 00:08:31.947 ID SIZE PATH 00:08:31.947 1 510.00MiB /dev/nvme0n1p1 00:08:31.947 00:08:31.947 23:04:54 -- common/autotest_common.sh@921 -- # return 0 00:08:31.947 23:04:54 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:32.517 23:04:55 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:32.517 23:04:55 -- target/filesystem.sh@25 -- # sync 00:08:32.517 23:04:55 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:32.517 23:04:55 -- target/filesystem.sh@27 -- # sync 00:08:32.517 23:04:55 -- target/filesystem.sh@29 -- # i=0 00:08:32.517 23:04:55 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:32.517 23:04:55 -- target/filesystem.sh@37 -- # kill -0 2652854 00:08:32.517 23:04:55 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:32.517 23:04:55 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:32.517 23:04:55 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:32.517 23:04:55 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:32.517 00:08:32.517 real 0m0.860s 00:08:32.517 user 0m0.026s 00:08:32.517 sys 0m0.064s 00:08:32.517 23:04:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.517 23:04:55 -- common/autotest_common.sh@10 -- # set +x 00:08:32.517 ************************************ 00:08:32.517 END TEST filesystem_in_capsule_btrfs 00:08:32.517 ************************************ 00:08:32.778 23:04:55 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:32.778 23:04:55 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:32.778 23:04:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:32.778 23:04:55 -- common/autotest_common.sh@10 -- # set +x 00:08:32.778 ************************************ 00:08:32.778 START TEST filesystem_in_capsule_xfs 00:08:32.778 ************************************ 00:08:32.778 23:04:55 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:32.778 23:04:55 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:32.778 23:04:55 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:32.778 23:04:55 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:32.778 23:04:55 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:32.778 23:04:55 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:32.778 23:04:55 -- common/autotest_common.sh@904 -- # local i=0 00:08:32.778 23:04:55 -- common/autotest_common.sh@905 -- # local force 00:08:32.778 23:04:55 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:32.778 23:04:55 -- common/autotest_common.sh@910 -- # force=-f 
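Note: the trace lines just above ('[' xfs = ext4 ']' followed by force=-f) are the interesting part of the make_filesystem helper: mkfs.ext4 is forced with -F, while mkfs.btrfs and mkfs.xfs take -f. In outline (an approximation; the helper's retry handling is not fully visible in this trace):

  # sketch of the force-flag branching seen above, retry handling omitted
  make_filesystem() {
      local fstype=$1 dev_name=$2 force
      if [ "$fstype" = ext4 ]; then
          force=-F            # mkfs.ext4 forces with -F
      else
          force=-f            # mkfs.btrfs and mkfs.xfs force with -f
      fi
      mkfs."$fstype" $force "$dev_name"
  }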
00:08:32.778 23:04:55 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:32.778 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:32.778 = sectsz=512 attr=2, projid32bit=1 00:08:32.778 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:32.778 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:32.778 data = bsize=4096 blocks=130560, imaxpct=25 00:08:32.778 = sunit=0 swidth=0 blks 00:08:32.778 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:32.778 log =internal log bsize=4096 blocks=16384, version=2 00:08:32.778 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:32.778 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:33.720 Discarding blocks...Done. 00:08:33.720 23:04:56 -- common/autotest_common.sh@921 -- # return 0 00:08:33.720 23:04:56 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:35.631 23:04:58 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:35.631 23:04:58 -- target/filesystem.sh@25 -- # sync 00:08:35.631 23:04:58 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:35.631 23:04:58 -- target/filesystem.sh@27 -- # sync 00:08:35.893 23:04:58 -- target/filesystem.sh@29 -- # i=0 00:08:35.893 23:04:58 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:35.893 23:04:58 -- target/filesystem.sh@37 -- # kill -0 2652854 00:08:35.893 23:04:58 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:35.893 23:04:58 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:35.893 23:04:58 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:35.893 23:04:58 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:35.893 00:08:35.893 real 0m3.121s 00:08:35.893 user 0m0.030s 00:08:35.893 sys 0m0.048s 00:08:35.893 23:04:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.893 23:04:58 -- common/autotest_common.sh@10 -- # set +x 00:08:35.893 ************************************ 00:08:35.893 END TEST filesystem_in_capsule_xfs 00:08:35.893 ************************************ 00:08:35.893 23:04:58 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:35.893 23:04:58 -- target/filesystem.sh@93 -- # sync 00:08:35.893 23:04:58 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:35.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:35.893 23:04:58 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:35.893 23:04:58 -- common/autotest_common.sh@1198 -- # local i=0 00:08:35.893 23:04:58 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:35.893 23:04:58 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:35.893 23:04:58 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:35.893 23:04:58 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:35.893 23:04:58 -- common/autotest_common.sh@1210 -- # return 0 00:08:35.893 23:04:58 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:35.893 23:04:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:35.893 23:04:58 -- common/autotest_common.sh@10 -- # set +x 00:08:36.153 23:04:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:36.153 23:04:58 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:36.153 23:04:58 -- target/filesystem.sh@101 -- # killprocess 2652854 00:08:36.153 23:04:58 -- common/autotest_common.sh@926 -- # '[' -z 2652854 ']' 00:08:36.153 23:04:58 -- common/autotest_common.sh@930 -- # kill -0 2652854 
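Note: each of the six filesystem_* subtests in this run (ext4, btrfs and xfs, first without and then with in-capsule data) exercised the freshly formatted partition the same way before moving on; stripped of the xtrace noise, the traced sequence is:

  # summary of the per-filesystem check sequence traced above
  mount /dev/nvme0n1p1 /mnt/device        # partition exported over NVMe/TCP
  touch /mnt/device/aaa                   # write a file through the mount
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 "$nvmfpid"                      # target process must still be running
  lsblk -l -o NAME | grep -q -w nvme0n1   # controller and partition still visible
  lsblk -l -o NAME | grep -q -w nvme0n1p1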
00:08:36.153 23:04:58 -- common/autotest_common.sh@931 -- # uname 00:08:36.153 23:04:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:36.153 23:04:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2652854 00:08:36.153 23:04:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:36.153 23:04:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:36.153 23:04:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2652854' 00:08:36.153 killing process with pid 2652854 00:08:36.153 23:04:58 -- common/autotest_common.sh@945 -- # kill 2652854 00:08:36.153 23:04:58 -- common/autotest_common.sh@950 -- # wait 2652854 00:08:36.414 23:04:58 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:36.414 00:08:36.414 real 0m12.320s 00:08:36.414 user 0m48.673s 00:08:36.414 sys 0m0.978s 00:08:36.414 23:04:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:36.414 23:04:58 -- common/autotest_common.sh@10 -- # set +x 00:08:36.414 ************************************ 00:08:36.414 END TEST nvmf_filesystem_in_capsule 00:08:36.414 ************************************ 00:08:36.414 23:04:58 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:36.414 23:04:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:36.414 23:04:58 -- nvmf/common.sh@116 -- # sync 00:08:36.414 23:04:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:36.414 23:04:58 -- nvmf/common.sh@119 -- # set +e 00:08:36.414 23:04:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:36.414 23:04:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:36.414 rmmod nvme_tcp 00:08:36.414 rmmod nvme_fabrics 00:08:36.414 rmmod nvme_keyring 00:08:36.414 23:04:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:36.414 23:04:58 -- nvmf/common.sh@123 -- # set -e 00:08:36.414 23:04:58 -- nvmf/common.sh@124 -- # return 0 00:08:36.414 23:04:58 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:36.414 23:04:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:36.414 23:04:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:36.414 23:04:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:36.414 23:04:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:36.414 23:04:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:36.414 23:04:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.414 23:04:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:36.414 23:04:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.956 23:05:01 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:38.956 00:08:38.956 real 0m36.986s 00:08:38.956 user 1m49.627s 00:08:38.956 sys 0m7.530s 00:08:38.956 23:05:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:38.956 23:05:01 -- common/autotest_common.sh@10 -- # set +x 00:08:38.956 ************************************ 00:08:38.956 END TEST nvmf_filesystem 00:08:38.956 ************************************ 00:08:38.956 23:05:01 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:38.956 23:05:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:38.956 23:05:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:38.956 23:05:01 -- common/autotest_common.sh@10 -- # set +x 00:08:38.956 ************************************ 00:08:38.956 START TEST nvmf_discovery 00:08:38.956 ************************************ 00:08:38.956 
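Note: the nvmftestfini/nvmfcleanup lines traced just before the discovery test banner undo the earlier setup; in outline:

  # illustrative summary of the traced teardown
  sync
  modprobe -v -r nvme-tcp        # drops nvme_tcp, nvme_fabrics and nvme_keyring, per the rmmod lines above
  modprobe -v -r nvme-fabrics
  remove_spdk_ns                 # nvmf/common.sh helper; its body is not expanded in this trace
  ip -4 addr flush cvl_0_1       # clear the initiator-side test address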
23:05:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:38.956 * Looking for test storage... 00:08:38.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:38.956 23:05:01 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:38.956 23:05:01 -- nvmf/common.sh@7 -- # uname -s 00:08:38.956 23:05:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.956 23:05:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.956 23:05:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.956 23:05:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.956 23:05:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.956 23:05:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.956 23:05:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.956 23:05:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.956 23:05:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.956 23:05:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.956 23:05:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:38.956 23:05:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:38.956 23:05:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.956 23:05:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.956 23:05:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:38.956 23:05:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:38.956 23:05:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.956 23:05:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.956 23:05:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.957 23:05:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.957 23:05:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.957 23:05:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.957 23:05:01 -- paths/export.sh@5 -- # export PATH 00:08:38.957 23:05:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.957 23:05:01 -- nvmf/common.sh@46 -- # : 0 00:08:38.957 23:05:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:38.957 23:05:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:38.957 23:05:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:38.957 23:05:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.957 23:05:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.957 23:05:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:38.957 23:05:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:38.957 23:05:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:38.957 23:05:01 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:38.957 23:05:01 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:38.957 23:05:01 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:38.957 23:05:01 -- target/discovery.sh@15 -- # hash nvme 00:08:38.957 23:05:01 -- target/discovery.sh@20 -- # nvmftestinit 00:08:38.957 23:05:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:38.957 23:05:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.957 23:05:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:38.957 23:05:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:38.957 23:05:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:38.957 23:05:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.957 23:05:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:38.957 23:05:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.957 23:05:01 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:38.957 23:05:01 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:38.957 23:05:01 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:38.957 23:05:01 -- common/autotest_common.sh@10 -- # set +x 00:08:45.646 23:05:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:45.646 23:05:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:45.646 23:05:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:45.646 23:05:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:45.646 23:05:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:45.646 23:05:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:45.646 23:05:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:45.646 23:05:07 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:45.646 23:05:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:45.646 23:05:07 -- nvmf/common.sh@295 -- # e810=() 00:08:45.646 23:05:07 -- nvmf/common.sh@295 -- # local -ga e810 00:08:45.646 23:05:07 -- nvmf/common.sh@296 -- # x722=() 00:08:45.646 23:05:07 -- nvmf/common.sh@296 -- # local -ga x722 00:08:45.646 23:05:07 -- nvmf/common.sh@297 -- # mlx=() 00:08:45.646 23:05:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:45.646 23:05:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:45.646 23:05:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:45.646 23:05:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:45.646 23:05:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:45.646 23:05:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:45.646 23:05:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:45.646 23:05:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:45.646 23:05:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:45.646 23:05:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:45.646 23:05:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:45.646 23:05:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:45.646 23:05:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:45.646 23:05:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:45.646 23:05:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:45.646 23:05:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:45.646 23:05:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:45.646 23:05:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:45.646 23:05:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:45.646 23:05:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:45.646 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:45.646 23:05:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:45.646 23:05:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:45.646 23:05:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.646 23:05:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.646 23:05:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:45.646 23:05:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:45.646 23:05:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:45.646 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:45.646 23:05:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:45.646 23:05:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:45.646 23:05:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.646 23:05:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.646 23:05:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:45.646 23:05:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:45.646 23:05:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:45.646 23:05:07 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:45.646 23:05:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:45.646 23:05:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.646 23:05:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:45.646 23:05:07 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.646 23:05:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:45.646 Found net devices under 0000:31:00.0: cvl_0_0 00:08:45.646 23:05:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.646 23:05:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:45.646 23:05:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.646 23:05:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:45.646 23:05:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.646 23:05:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:45.646 Found net devices under 0000:31:00.1: cvl_0_1 00:08:45.646 23:05:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.646 23:05:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:45.646 23:05:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:45.646 23:05:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:45.646 23:05:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:45.646 23:05:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:45.646 23:05:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.646 23:05:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:45.646 23:05:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:45.646 23:05:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:45.646 23:05:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:45.646 23:05:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:45.646 23:05:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:45.646 23:05:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:45.646 23:05:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.646 23:05:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:45.646 23:05:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:45.646 23:05:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:45.646 23:05:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:45.646 23:05:08 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:45.646 23:05:08 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:45.646 23:05:08 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:45.646 23:05:08 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:45.646 23:05:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:45.646 23:05:08 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:45.646 23:05:08 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:45.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:45.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:08:45.646 00:08:45.646 --- 10.0.0.2 ping statistics --- 00:08:45.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.646 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:08:45.646 23:05:08 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:45.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:45.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:08:45.646 00:08:45.646 --- 10.0.0.1 ping statistics --- 00:08:45.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.646 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:08:45.646 23:05:08 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.646 23:05:08 -- nvmf/common.sh@410 -- # return 0 00:08:45.646 23:05:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:45.646 23:05:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.646 23:05:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:45.646 23:05:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:45.646 23:05:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.646 23:05:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:45.646 23:05:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:45.646 23:05:08 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:45.646 23:05:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:45.646 23:05:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:45.646 23:05:08 -- common/autotest_common.sh@10 -- # set +x 00:08:45.646 23:05:08 -- nvmf/common.sh@469 -- # nvmfpid=2659850 00:08:45.646 23:05:08 -- nvmf/common.sh@470 -- # waitforlisten 2659850 00:08:45.646 23:05:08 -- common/autotest_common.sh@819 -- # '[' -z 2659850 ']' 00:08:45.646 23:05:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.646 23:05:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:45.646 23:05:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.646 23:05:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:45.646 23:05:08 -- common/autotest_common.sh@10 -- # set +x 00:08:45.646 23:05:08 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:45.646 [2024-06-07 23:05:08.292500] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:08:45.647 [2024-06-07 23:05:08.292563] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.907 EAL: No free 2048 kB hugepages reported on node 1 00:08:45.907 [2024-06-07 23:05:08.364610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:45.907 [2024-06-07 23:05:08.403064] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:45.907 [2024-06-07 23:05:08.403206] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.907 [2024-06-07 23:05:08.403216] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.907 [2024-06-07 23:05:08.403225] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
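For reference, the loopback topology that nvmftestinit configured just above can be reproduced by hand with iproute2 and iptables alone. The sketch below assumes the same two E810 ports the harness enumerated (cvl_0_0 and cvl_0_1) and is not part of the test script itself; substitute whatever interface names exist on your system. Port 4420 is the NVMe/TCP listener the test opens later.

    # Move one port into a private namespace and address both ends (mirrors nvmf_tcp_init above).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP traffic on the initiator-side port, then verify reachability both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1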
00:08:45.907 [2024-06-07 23:05:08.403316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.907 [2024-06-07 23:05:08.403481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:45.907 [2024-06-07 23:05:08.403640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.907 [2024-06-07 23:05:08.403642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:46.479 23:05:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:46.479 23:05:09 -- common/autotest_common.sh@852 -- # return 0 00:08:46.479 23:05:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:46.479 23:05:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:46.479 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:46.479 23:05:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.479 23:05:09 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:46.479 23:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.479 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:46.479 [2024-06-07 23:05:09.108531] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:46.479 23:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.479 23:05:09 -- target/discovery.sh@26 -- # seq 1 4 00:08:46.479 23:05:09 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:46.479 23:05:09 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:46.479 23:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.479 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:46.479 Null1 00:08:46.479 23:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.479 23:05:09 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:46.479 23:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.479 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:46.479 23:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.479 23:05:09 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:46.479 23:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.479 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:46.741 23:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.741 23:05:09 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:46.741 23:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.741 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:46.741 [2024-06-07 23:05:09.168848] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:46.741 23:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.741 23:05:09 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:46.741 23:05:09 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:46.741 23:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.741 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:46.741 Null2 00:08:46.741 23:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.741 23:05:09 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:46.741 23:05:09 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.741 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:46.741 23:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.741 23:05:09 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:46.741 23:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.741 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:46.741 23:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.741 23:05:09 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:46.741 23:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.741 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:46.741 23:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.741 23:05:09 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:46.741 23:05:09 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:46.741 23:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.741 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:46.741 Null3 00:08:46.741 23:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.741 23:05:09 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:46.741 23:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.741 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:46.741 23:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.741 23:05:09 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:46.741 23:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.741 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:46.741 23:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.741 23:05:09 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:46.741 23:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.741 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:46.741 23:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.741 23:05:09 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:46.741 23:05:09 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:46.741 23:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.741 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:46.741 Null4 00:08:46.741 23:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.741 23:05:09 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:46.741 23:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.741 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:46.741 23:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.741 23:05:09 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:46.741 23:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.741 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:46.741 23:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.741 23:05:09 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:46.741 
23:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.741 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:46.741 23:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.741 23:05:09 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:46.741 23:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.741 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:46.741 23:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.741 23:05:09 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:46.741 23:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.741 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:46.741 23:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.741 23:05:09 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:08:47.002 00:08:47.002 Discovery Log Number of Records 6, Generation counter 6 00:08:47.002 =====Discovery Log Entry 0====== 00:08:47.002 trtype: tcp 00:08:47.002 adrfam: ipv4 00:08:47.002 subtype: current discovery subsystem 00:08:47.002 treq: not required 00:08:47.002 portid: 0 00:08:47.002 trsvcid: 4420 00:08:47.002 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:47.002 traddr: 10.0.0.2 00:08:47.002 eflags: explicit discovery connections, duplicate discovery information 00:08:47.002 sectype: none 00:08:47.002 =====Discovery Log Entry 1====== 00:08:47.002 trtype: tcp 00:08:47.002 adrfam: ipv4 00:08:47.002 subtype: nvme subsystem 00:08:47.002 treq: not required 00:08:47.002 portid: 0 00:08:47.002 trsvcid: 4420 00:08:47.002 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:47.002 traddr: 10.0.0.2 00:08:47.002 eflags: none 00:08:47.002 sectype: none 00:08:47.002 =====Discovery Log Entry 2====== 00:08:47.002 trtype: tcp 00:08:47.002 adrfam: ipv4 00:08:47.002 subtype: nvme subsystem 00:08:47.002 treq: not required 00:08:47.002 portid: 0 00:08:47.002 trsvcid: 4420 00:08:47.002 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:47.002 traddr: 10.0.0.2 00:08:47.002 eflags: none 00:08:47.002 sectype: none 00:08:47.002 =====Discovery Log Entry 3====== 00:08:47.002 trtype: tcp 00:08:47.002 adrfam: ipv4 00:08:47.002 subtype: nvme subsystem 00:08:47.002 treq: not required 00:08:47.002 portid: 0 00:08:47.002 trsvcid: 4420 00:08:47.002 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:47.002 traddr: 10.0.0.2 00:08:47.002 eflags: none 00:08:47.002 sectype: none 00:08:47.002 =====Discovery Log Entry 4====== 00:08:47.002 trtype: tcp 00:08:47.002 adrfam: ipv4 00:08:47.002 subtype: nvme subsystem 00:08:47.002 treq: not required 00:08:47.002 portid: 0 00:08:47.002 trsvcid: 4420 00:08:47.002 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:47.002 traddr: 10.0.0.2 00:08:47.002 eflags: none 00:08:47.002 sectype: none 00:08:47.002 =====Discovery Log Entry 5====== 00:08:47.002 trtype: tcp 00:08:47.002 adrfam: ipv4 00:08:47.002 subtype: discovery subsystem referral 00:08:47.002 treq: not required 00:08:47.002 portid: 0 00:08:47.002 trsvcid: 4430 00:08:47.002 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:47.002 traddr: 10.0.0.2 00:08:47.002 eflags: none 00:08:47.002 sectype: none 00:08:47.002 23:05:09 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:47.002 Perform nvmf subsystem discovery via RPC 00:08:47.002 23:05:09 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:47.002 23:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.002 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:47.002 [2024-06-07 23:05:09.489799] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:47.002 [ 00:08:47.002 { 00:08:47.002 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:47.002 "subtype": "Discovery", 00:08:47.002 "listen_addresses": [ 00:08:47.002 { 00:08:47.002 "transport": "TCP", 00:08:47.002 "trtype": "TCP", 00:08:47.002 "adrfam": "IPv4", 00:08:47.002 "traddr": "10.0.0.2", 00:08:47.002 "trsvcid": "4420" 00:08:47.002 } 00:08:47.002 ], 00:08:47.002 "allow_any_host": true, 00:08:47.002 "hosts": [] 00:08:47.002 }, 00:08:47.002 { 00:08:47.002 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:47.002 "subtype": "NVMe", 00:08:47.002 "listen_addresses": [ 00:08:47.002 { 00:08:47.002 "transport": "TCP", 00:08:47.002 "trtype": "TCP", 00:08:47.002 "adrfam": "IPv4", 00:08:47.002 "traddr": "10.0.0.2", 00:08:47.002 "trsvcid": "4420" 00:08:47.002 } 00:08:47.002 ], 00:08:47.002 "allow_any_host": true, 00:08:47.002 "hosts": [], 00:08:47.002 "serial_number": "SPDK00000000000001", 00:08:47.002 "model_number": "SPDK bdev Controller", 00:08:47.002 "max_namespaces": 32, 00:08:47.002 "min_cntlid": 1, 00:08:47.002 "max_cntlid": 65519, 00:08:47.002 "namespaces": [ 00:08:47.002 { 00:08:47.002 "nsid": 1, 00:08:47.002 "bdev_name": "Null1", 00:08:47.002 "name": "Null1", 00:08:47.002 "nguid": "CA8EB866D946452EACE5F1D5F331EB69", 00:08:47.002 "uuid": "ca8eb866-d946-452e-ace5-f1d5f331eb69" 00:08:47.002 } 00:08:47.002 ] 00:08:47.002 }, 00:08:47.002 { 00:08:47.002 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:47.002 "subtype": "NVMe", 00:08:47.002 "listen_addresses": [ 00:08:47.002 { 00:08:47.002 "transport": "TCP", 00:08:47.002 "trtype": "TCP", 00:08:47.002 "adrfam": "IPv4", 00:08:47.002 "traddr": "10.0.0.2", 00:08:47.002 "trsvcid": "4420" 00:08:47.002 } 00:08:47.002 ], 00:08:47.002 "allow_any_host": true, 00:08:47.002 "hosts": [], 00:08:47.002 "serial_number": "SPDK00000000000002", 00:08:47.002 "model_number": "SPDK bdev Controller", 00:08:47.002 "max_namespaces": 32, 00:08:47.002 "min_cntlid": 1, 00:08:47.002 "max_cntlid": 65519, 00:08:47.002 "namespaces": [ 00:08:47.002 { 00:08:47.002 "nsid": 1, 00:08:47.002 "bdev_name": "Null2", 00:08:47.002 "name": "Null2", 00:08:47.002 "nguid": "F88329A83A314A4496A63CD3FCE3D5AF", 00:08:47.002 "uuid": "f88329a8-3a31-4a44-96a6-3cd3fce3d5af" 00:08:47.002 } 00:08:47.002 ] 00:08:47.002 }, 00:08:47.002 { 00:08:47.002 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:47.002 "subtype": "NVMe", 00:08:47.002 "listen_addresses": [ 00:08:47.002 { 00:08:47.002 "transport": "TCP", 00:08:47.002 "trtype": "TCP", 00:08:47.002 "adrfam": "IPv4", 00:08:47.002 "traddr": "10.0.0.2", 00:08:47.002 "trsvcid": "4420" 00:08:47.002 } 00:08:47.002 ], 00:08:47.002 "allow_any_host": true, 00:08:47.002 "hosts": [], 00:08:47.002 "serial_number": "SPDK00000000000003", 00:08:47.002 "model_number": "SPDK bdev Controller", 00:08:47.002 "max_namespaces": 32, 00:08:47.002 "min_cntlid": 1, 00:08:47.002 "max_cntlid": 65519, 00:08:47.002 "namespaces": [ 00:08:47.002 { 00:08:47.002 "nsid": 1, 00:08:47.002 "bdev_name": "Null3", 00:08:47.002 "name": "Null3", 00:08:47.002 "nguid": "30BBF4AB827B4D559F98FDBC8024BCC1", 00:08:47.002 "uuid": "30bbf4ab-827b-4d55-9f98-fdbc8024bcc1" 00:08:47.002 } 00:08:47.002 ] 
00:08:47.002 }, 00:08:47.002 { 00:08:47.002 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:47.003 "subtype": "NVMe", 00:08:47.003 "listen_addresses": [ 00:08:47.003 { 00:08:47.003 "transport": "TCP", 00:08:47.003 "trtype": "TCP", 00:08:47.003 "adrfam": "IPv4", 00:08:47.003 "traddr": "10.0.0.2", 00:08:47.003 "trsvcid": "4420" 00:08:47.003 } 00:08:47.003 ], 00:08:47.003 "allow_any_host": true, 00:08:47.003 "hosts": [], 00:08:47.003 "serial_number": "SPDK00000000000004", 00:08:47.003 "model_number": "SPDK bdev Controller", 00:08:47.003 "max_namespaces": 32, 00:08:47.003 "min_cntlid": 1, 00:08:47.003 "max_cntlid": 65519, 00:08:47.003 "namespaces": [ 00:08:47.003 { 00:08:47.003 "nsid": 1, 00:08:47.003 "bdev_name": "Null4", 00:08:47.003 "name": "Null4", 00:08:47.003 "nguid": "5F9C1BBF5C974B3EB8C3D36E56A1B832", 00:08:47.003 "uuid": "5f9c1bbf-5c97-4b3e-b8c3-d36e56a1b832" 00:08:47.003 } 00:08:47.003 ] 00:08:47.003 } 00:08:47.003 ] 00:08:47.003 23:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.003 23:05:09 -- target/discovery.sh@42 -- # seq 1 4 00:08:47.003 23:05:09 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:47.003 23:05:09 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:47.003 23:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.003 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:47.003 23:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.003 23:05:09 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:47.003 23:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.003 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:47.003 23:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.003 23:05:09 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:47.003 23:05:09 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:47.003 23:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.003 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:47.003 23:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.003 23:05:09 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:47.003 23:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.003 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:47.003 23:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.003 23:05:09 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:47.003 23:05:09 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:47.003 23:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.003 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:47.003 23:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.003 23:05:09 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:47.003 23:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.003 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:47.003 23:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.003 23:05:09 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:47.003 23:05:09 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:47.003 23:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.003 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:47.003 23:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
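The JSON dump above is the output of the nvmf_get_subsystems RPC, and the rpc_cmd calls that follow tear the four test subsystems back down. A rough stand-alone equivalent, driven through SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock socket of the target started earlier, might look like the sketch below (not part of the test script; all RPC names are the ones traced above).

    # List the configured subsystem NQNs (same data as the JSON above).
    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_get_subsystems | jq -r '.[].nqn'
    # Remove each test subsystem and its backing null bdev, then drop the port-4430 referral.
    for i in 1 2 3 4; do
        $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
        $RPC bdev_null_delete Null$i
    done
    $RPC nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430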
00:08:47.003 23:05:09 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:47.003 23:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.003 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:47.003 23:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.003 23:05:09 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:47.003 23:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.003 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:47.003 23:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.003 23:05:09 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:47.003 23:05:09 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:47.003 23:05:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.003 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:08:47.003 23:05:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.003 23:05:09 -- target/discovery.sh@49 -- # check_bdevs= 00:08:47.003 23:05:09 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:47.003 23:05:09 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:47.003 23:05:09 -- target/discovery.sh@57 -- # nvmftestfini 00:08:47.003 23:05:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:47.003 23:05:09 -- nvmf/common.sh@116 -- # sync 00:08:47.003 23:05:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:47.003 23:05:09 -- nvmf/common.sh@119 -- # set +e 00:08:47.003 23:05:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:47.003 23:05:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:47.003 rmmod nvme_tcp 00:08:47.003 rmmod nvme_fabrics 00:08:47.263 rmmod nvme_keyring 00:08:47.263 23:05:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:47.263 23:05:09 -- nvmf/common.sh@123 -- # set -e 00:08:47.263 23:05:09 -- nvmf/common.sh@124 -- # return 0 00:08:47.263 23:05:09 -- nvmf/common.sh@477 -- # '[' -n 2659850 ']' 00:08:47.263 23:05:09 -- nvmf/common.sh@478 -- # killprocess 2659850 00:08:47.263 23:05:09 -- common/autotest_common.sh@926 -- # '[' -z 2659850 ']' 00:08:47.263 23:05:09 -- common/autotest_common.sh@930 -- # kill -0 2659850 00:08:47.263 23:05:09 -- common/autotest_common.sh@931 -- # uname 00:08:47.263 23:05:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:47.263 23:05:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2659850 00:08:47.263 23:05:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:47.263 23:05:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:47.263 23:05:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2659850' 00:08:47.263 killing process with pid 2659850 00:08:47.263 23:05:09 -- common/autotest_common.sh@945 -- # kill 2659850 00:08:47.263 [2024-06-07 23:05:09.789826] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:47.263 23:05:09 -- common/autotest_common.sh@950 -- # wait 2659850 00:08:47.263 23:05:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:47.263 23:05:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:47.263 23:05:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:47.263 23:05:09 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:47.263 23:05:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:47.263 23:05:09 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.263 23:05:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:47.263 23:05:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.877 23:05:11 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:49.877 00:08:49.877 real 0m10.891s 00:08:49.877 user 0m8.067s 00:08:49.877 sys 0m5.577s 00:08:49.877 23:05:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.877 23:05:11 -- common/autotest_common.sh@10 -- # set +x 00:08:49.877 ************************************ 00:08:49.877 END TEST nvmf_discovery 00:08:49.877 ************************************ 00:08:49.877 23:05:12 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:49.877 23:05:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:49.877 23:05:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:49.877 23:05:12 -- common/autotest_common.sh@10 -- # set +x 00:08:49.877 ************************************ 00:08:49.877 START TEST nvmf_referrals 00:08:49.877 ************************************ 00:08:49.877 23:05:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:49.877 * Looking for test storage... 00:08:49.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:49.877 23:05:12 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:49.877 23:05:12 -- nvmf/common.sh@7 -- # uname -s 00:08:49.877 23:05:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:49.877 23:05:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.877 23:05:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.877 23:05:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.877 23:05:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.877 23:05:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.877 23:05:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.877 23:05:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.877 23:05:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.877 23:05:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.877 23:05:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:49.877 23:05:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:49.877 23:05:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.877 23:05:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.877 23:05:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:49.877 23:05:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:49.877 23:05:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.877 23:05:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.877 23:05:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.877 23:05:12 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.877 23:05:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.877 23:05:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.877 23:05:12 -- paths/export.sh@5 -- # export PATH 00:08:49.877 23:05:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.877 23:05:12 -- nvmf/common.sh@46 -- # : 0 00:08:49.877 23:05:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:49.877 23:05:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:49.877 23:05:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:49.877 23:05:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.877 23:05:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.877 23:05:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:49.877 23:05:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:49.877 23:05:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:49.877 23:05:12 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:49.877 23:05:12 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:49.877 23:05:12 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:49.877 23:05:12 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:49.877 23:05:12 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:49.877 23:05:12 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:49.877 23:05:12 -- target/referrals.sh@37 -- # nvmftestinit 00:08:49.877 23:05:12 -- nvmf/common.sh@429 -- # '[' 
-z tcp ']' 00:08:49.877 23:05:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:49.877 23:05:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:49.877 23:05:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:49.877 23:05:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:49.877 23:05:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.877 23:05:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:49.877 23:05:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.877 23:05:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:49.878 23:05:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:49.878 23:05:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:49.878 23:05:12 -- common/autotest_common.sh@10 -- # set +x 00:08:56.467 23:05:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:56.467 23:05:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:56.467 23:05:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:56.467 23:05:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:56.467 23:05:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:56.467 23:05:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:56.467 23:05:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:56.467 23:05:18 -- nvmf/common.sh@294 -- # net_devs=() 00:08:56.467 23:05:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:56.467 23:05:18 -- nvmf/common.sh@295 -- # e810=() 00:08:56.467 23:05:18 -- nvmf/common.sh@295 -- # local -ga e810 00:08:56.467 23:05:18 -- nvmf/common.sh@296 -- # x722=() 00:08:56.467 23:05:18 -- nvmf/common.sh@296 -- # local -ga x722 00:08:56.467 23:05:18 -- nvmf/common.sh@297 -- # mlx=() 00:08:56.467 23:05:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:56.467 23:05:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:56.467 23:05:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:56.467 23:05:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:56.467 23:05:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:56.467 23:05:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:56.467 23:05:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:56.467 23:05:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:56.467 23:05:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:56.467 23:05:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:56.467 23:05:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:56.467 23:05:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:56.467 23:05:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:56.467 23:05:18 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:56.467 23:05:18 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:56.467 23:05:18 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:56.467 23:05:18 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:56.467 23:05:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:56.467 23:05:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:56.467 23:05:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:56.467 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:56.467 23:05:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:56.467 23:05:18 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:56.467 23:05:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:56.467 23:05:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:56.467 23:05:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:56.467 23:05:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:56.467 23:05:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:56.467 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:56.467 23:05:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:56.467 23:05:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:56.467 23:05:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:56.468 23:05:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:56.468 23:05:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:56.468 23:05:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:56.468 23:05:18 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:56.468 23:05:18 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:56.468 23:05:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:56.468 23:05:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.468 23:05:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:56.468 23:05:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.468 23:05:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:56.468 Found net devices under 0000:31:00.0: cvl_0_0 00:08:56.468 23:05:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.468 23:05:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:56.468 23:05:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.468 23:05:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:56.468 23:05:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.468 23:05:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:56.468 Found net devices under 0000:31:00.1: cvl_0_1 00:08:56.468 23:05:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.468 23:05:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:56.468 23:05:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:56.468 23:05:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:56.468 23:05:18 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:56.468 23:05:18 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:56.468 23:05:18 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:56.468 23:05:18 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:56.468 23:05:18 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:56.468 23:05:18 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:56.468 23:05:18 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:56.468 23:05:18 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:56.468 23:05:18 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:56.468 23:05:18 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:56.468 23:05:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:56.468 23:05:18 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:56.468 23:05:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:56.468 23:05:18 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:56.468 23:05:18 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
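The "Found net devices under ..." lines above come from gather_supported_nvmf_pci_devs, which maps each matching PCI function to its kernel netdev through sysfs. A minimal stand-alone sketch of that lookup for the Intel E810 parts used here (device ID 0x159b) could use lspci as below; note this is an illustration only, since the test script builds its list from a pre-populated pci_bus_cache rather than calling lspci.

    # Enumerate the netdev behind every 8086:159b (E810) PCI function via sysfs.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            echo "Found net devices under $pci: ${dev##*/}"
        done
    done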
00:08:56.468 23:05:19 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:56.468 23:05:19 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:56.468 23:05:19 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:56.468 23:05:19 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:56.729 23:05:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:56.729 23:05:19 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:56.729 23:05:19 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:56.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:56.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms 00:08:56.729 00:08:56.729 --- 10.0.0.2 ping statistics --- 00:08:56.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.729 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:08:56.729 23:05:19 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:56.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:56.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:08:56.729 00:08:56.729 --- 10.0.0.1 ping statistics --- 00:08:56.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.729 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:08:56.729 23:05:19 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:56.729 23:05:19 -- nvmf/common.sh@410 -- # return 0 00:08:56.729 23:05:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:56.729 23:05:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:56.729 23:05:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:56.729 23:05:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:56.729 23:05:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:56.729 23:05:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:56.729 23:05:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:56.729 23:05:19 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:56.729 23:05:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:56.729 23:05:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:56.729 23:05:19 -- common/autotest_common.sh@10 -- # set +x 00:08:56.729 23:05:19 -- nvmf/common.sh@469 -- # nvmfpid=2664307 00:08:56.729 23:05:19 -- nvmf/common.sh@470 -- # waitforlisten 2664307 00:08:56.729 23:05:19 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:56.729 23:05:19 -- common/autotest_common.sh@819 -- # '[' -z 2664307 ']' 00:08:56.729 23:05:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.729 23:05:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:56.729 23:05:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.729 23:05:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:56.729 23:05:19 -- common/autotest_common.sh@10 -- # set +x 00:08:56.729 [2024-06-07 23:05:19.333481] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:08:56.729 [2024-06-07 23:05:19.333549] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.729 EAL: No free 2048 kB hugepages reported on node 1 00:08:56.729 [2024-06-07 23:05:19.404768] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:56.990 [2024-06-07 23:05:19.442676] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:56.990 [2024-06-07 23:05:19.442820] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:56.990 [2024-06-07 23:05:19.442830] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:56.990 [2024-06-07 23:05:19.442838] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:56.990 [2024-06-07 23:05:19.442980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.990 [2024-06-07 23:05:19.443144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:56.990 [2024-06-07 23:05:19.443288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.990 [2024-06-07 23:05:19.443289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:57.562 23:05:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:57.562 23:05:20 -- common/autotest_common.sh@852 -- # return 0 00:08:57.562 23:05:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:57.562 23:05:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:57.562 23:05:20 -- common/autotest_common.sh@10 -- # set +x 00:08:57.562 23:05:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:57.562 23:05:20 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:57.562 23:05:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.562 23:05:20 -- common/autotest_common.sh@10 -- # set +x 00:08:57.562 [2024-06-07 23:05:20.138503] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:57.562 23:05:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:57.562 23:05:20 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:57.562 23:05:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.562 23:05:20 -- common/autotest_common.sh@10 -- # set +x 00:08:57.562 [2024-06-07 23:05:20.150666] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:57.562 23:05:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:57.562 23:05:20 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:57.562 23:05:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.562 23:05:20 -- common/autotest_common.sh@10 -- # set +x 00:08:57.562 23:05:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:57.562 23:05:20 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:57.562 23:05:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.562 23:05:20 -- common/autotest_common.sh@10 -- # set +x 00:08:57.562 23:05:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:57.562 23:05:20 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 
-s 4430 00:08:57.562 23:05:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.562 23:05:20 -- common/autotest_common.sh@10 -- # set +x 00:08:57.562 23:05:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:57.562 23:05:20 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:57.562 23:05:20 -- target/referrals.sh@48 -- # jq length 00:08:57.562 23:05:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.562 23:05:20 -- common/autotest_common.sh@10 -- # set +x 00:08:57.562 23:05:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:57.562 23:05:20 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:57.562 23:05:20 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:57.562 23:05:20 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:57.562 23:05:20 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:57.562 23:05:20 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:57.562 23:05:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.562 23:05:20 -- common/autotest_common.sh@10 -- # set +x 00:08:57.562 23:05:20 -- target/referrals.sh@21 -- # sort 00:08:57.822 23:05:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:57.822 23:05:20 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:57.822 23:05:20 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:57.823 23:05:20 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:57.823 23:05:20 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:57.823 23:05:20 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:57.823 23:05:20 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:57.823 23:05:20 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:57.823 23:05:20 -- target/referrals.sh@26 -- # sort 00:08:57.823 23:05:20 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:57.823 23:05:20 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:57.823 23:05:20 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:57.823 23:05:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.823 23:05:20 -- common/autotest_common.sh@10 -- # set +x 00:08:57.823 23:05:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:57.823 23:05:20 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:57.823 23:05:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.823 23:05:20 -- common/autotest_common.sh@10 -- # set +x 00:08:58.084 23:05:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.084 23:05:20 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:58.084 23:05:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.084 23:05:20 -- common/autotest_common.sh@10 -- # set +x 00:08:58.084 23:05:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.084 23:05:20 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:58.084 23:05:20 -- target/referrals.sh@56 -- # jq length 00:08:58.084 23:05:20 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.084 23:05:20 -- common/autotest_common.sh@10 -- # set +x 00:08:58.084 23:05:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.084 23:05:20 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:58.084 23:05:20 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:58.084 23:05:20 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:58.084 23:05:20 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:58.084 23:05:20 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:58.084 23:05:20 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:58.084 23:05:20 -- target/referrals.sh@26 -- # sort 00:08:58.084 23:05:20 -- target/referrals.sh@26 -- # echo 00:08:58.084 23:05:20 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:58.084 23:05:20 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:58.084 23:05:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.084 23:05:20 -- common/autotest_common.sh@10 -- # set +x 00:08:58.084 23:05:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.084 23:05:20 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:58.084 23:05:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.084 23:05:20 -- common/autotest_common.sh@10 -- # set +x 00:08:58.084 23:05:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.084 23:05:20 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:58.084 23:05:20 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:58.084 23:05:20 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:58.084 23:05:20 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:58.084 23:05:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.084 23:05:20 -- common/autotest_common.sh@10 -- # set +x 00:08:58.084 23:05:20 -- target/referrals.sh@21 -- # sort 00:08:58.084 23:05:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.084 23:05:20 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:58.084 23:05:20 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:58.084 23:05:20 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:58.084 23:05:20 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:58.084 23:05:20 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:58.084 23:05:20 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:58.084 23:05:20 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:58.084 23:05:20 -- target/referrals.sh@26 -- # sort 00:08:58.346 23:05:20 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:58.346 23:05:20 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:58.346 23:05:20 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:58.346 23:05:20 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:58.346 23:05:20 -- 
target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:58.346 23:05:20 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:58.346 23:05:20 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:58.346 23:05:21 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:58.607 23:05:21 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:58.607 23:05:21 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:58.607 23:05:21 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:58.607 23:05:21 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:58.607 23:05:21 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:58.607 23:05:21 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:58.607 23:05:21 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:58.607 23:05:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.607 23:05:21 -- common/autotest_common.sh@10 -- # set +x 00:08:58.607 23:05:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.607 23:05:21 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:58.607 23:05:21 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:58.607 23:05:21 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:58.607 23:05:21 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:58.607 23:05:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.607 23:05:21 -- common/autotest_common.sh@10 -- # set +x 00:08:58.607 23:05:21 -- target/referrals.sh@21 -- # sort 00:08:58.607 23:05:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.607 23:05:21 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:58.607 23:05:21 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:58.607 23:05:21 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:58.607 23:05:21 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:58.607 23:05:21 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:58.607 23:05:21 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:58.607 23:05:21 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:58.607 23:05:21 -- target/referrals.sh@26 -- # sort 00:08:58.868 23:05:21 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:58.868 23:05:21 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:58.868 23:05:21 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:58.868 23:05:21 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:58.868 23:05:21 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:58.868 23:05:21 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:58.868 23:05:21 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:58.868 23:05:21 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:58.868 23:05:21 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:58.868 23:05:21 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:58.868 23:05:21 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:58.868 23:05:21 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:58.868 23:05:21 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:58.868 23:05:21 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:58.868 23:05:21 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:58.868 23:05:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.868 23:05:21 -- common/autotest_common.sh@10 -- # set +x 00:08:59.128 23:05:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.128 23:05:21 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:59.128 23:05:21 -- target/referrals.sh@82 -- # jq length 00:08:59.128 23:05:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.128 23:05:21 -- common/autotest_common.sh@10 -- # set +x 00:08:59.128 23:05:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.128 23:05:21 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:59.128 23:05:21 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:59.128 23:05:21 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:59.128 23:05:21 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:59.128 23:05:21 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:59.128 23:05:21 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:59.128 23:05:21 -- target/referrals.sh@26 -- # sort 00:08:59.128 23:05:21 -- target/referrals.sh@26 -- # echo 00:08:59.128 23:05:21 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:59.128 23:05:21 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:59.128 23:05:21 -- target/referrals.sh@86 -- # nvmftestfini 00:08:59.128 23:05:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:59.128 23:05:21 -- nvmf/common.sh@116 -- # sync 00:08:59.128 23:05:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:59.128 23:05:21 -- nvmf/common.sh@119 -- # set +e 00:08:59.128 23:05:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:59.128 23:05:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:59.128 rmmod nvme_tcp 00:08:59.128 rmmod nvme_fabrics 00:08:59.128 rmmod nvme_keyring 00:08:59.128 23:05:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:59.128 23:05:21 -- nvmf/common.sh@123 -- # set -e 00:08:59.128 23:05:21 -- nvmf/common.sh@124 -- # return 0 00:08:59.128 23:05:21 -- nvmf/common.sh@477 
-- # '[' -n 2664307 ']' 00:08:59.128 23:05:21 -- nvmf/common.sh@478 -- # killprocess 2664307 00:08:59.128 23:05:21 -- common/autotest_common.sh@926 -- # '[' -z 2664307 ']' 00:08:59.128 23:05:21 -- common/autotest_common.sh@930 -- # kill -0 2664307 00:08:59.128 23:05:21 -- common/autotest_common.sh@931 -- # uname 00:08:59.128 23:05:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:59.128 23:05:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2664307 00:08:59.390 23:05:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:59.390 23:05:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:59.390 23:05:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2664307' 00:08:59.390 killing process with pid 2664307 00:08:59.390 23:05:21 -- common/autotest_common.sh@945 -- # kill 2664307 00:08:59.390 23:05:21 -- common/autotest_common.sh@950 -- # wait 2664307 00:08:59.390 23:05:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:59.390 23:05:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:59.390 23:05:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:59.390 23:05:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:59.390 23:05:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:59.390 23:05:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.390 23:05:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:59.390 23:05:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.935 23:05:24 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:09:01.935 00:09:01.935 real 0m12.002s 00:09:01.935 user 0m12.851s 00:09:01.935 sys 0m5.867s 00:09:01.935 23:05:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.935 23:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:01.935 ************************************ 00:09:01.935 END TEST nvmf_referrals 00:09:01.935 ************************************ 00:09:01.935 23:05:24 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:01.935 23:05:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:01.935 23:05:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:01.935 23:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:01.935 ************************************ 00:09:01.935 START TEST nvmf_connect_disconnect 00:09:01.935 ************************************ 00:09:01.935 23:05:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:01.935 * Looking for test storage... 
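Before the connect/disconnect output gets going: the nvmf_referrals run that ends just above reduces to a handful of RPC and nvme-cli calls. A minimal standalone sketch of that flow, assuming a target already serving discovery on 10.0.0.2:8009 and rpc.py from the spdk checkout (the harness additionally passes --hostnqn/--hostid to nvme discover, omitted here for brevity):

# add two referrals: one to another discovery service, one to a named subsystem
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
# the target should now report both ...
scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
# ... and a host should see them in the discovery log page
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json | \
  jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
# remove both referrals and confirm the list is empty again
scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery
scripts/rpc.py nvmf_discovery_get_referrals | jq length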
00:09:01.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.935 23:05:24 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:01.935 23:05:24 -- nvmf/common.sh@7 -- # uname -s 00:09:01.935 23:05:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.935 23:05:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.935 23:05:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.936 23:05:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.936 23:05:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.936 23:05:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.936 23:05:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.936 23:05:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.936 23:05:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.936 23:05:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.936 23:05:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:01.936 23:05:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:01.936 23:05:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.936 23:05:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.936 23:05:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:01.936 23:05:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:01.936 23:05:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.936 23:05:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.936 23:05:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.936 23:05:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.936 23:05:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.936 23:05:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.936 23:05:24 -- paths/export.sh@5 -- # export PATH 00:09:01.936 23:05:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.936 23:05:24 -- nvmf/common.sh@46 -- # : 0 00:09:01.936 23:05:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:01.936 23:05:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:01.936 23:05:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:01.936 23:05:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.936 23:05:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.936 23:05:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:01.936 23:05:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:01.936 23:05:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:01.936 23:05:24 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:01.936 23:05:24 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:01.936 23:05:24 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:01.936 23:05:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:01.936 23:05:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.936 23:05:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:01.936 23:05:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:01.936 23:05:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:01.936 23:05:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.936 23:05:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:01.936 23:05:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.936 23:05:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:09:01.936 23:05:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:09:01.936 23:05:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:09:01.936 23:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:10.077 23:05:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:10.077 23:05:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:09:10.077 23:05:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:09:10.077 23:05:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:09:10.077 23:05:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:09:10.077 23:05:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:09:10.077 23:05:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:09:10.077 23:05:31 -- nvmf/common.sh@294 -- # net_devs=() 00:09:10.077 23:05:31 -- nvmf/common.sh@294 -- # local -ga net_devs 
00:09:10.077 23:05:31 -- nvmf/common.sh@295 -- # e810=() 00:09:10.077 23:05:31 -- nvmf/common.sh@295 -- # local -ga e810 00:09:10.077 23:05:31 -- nvmf/common.sh@296 -- # x722=() 00:09:10.077 23:05:31 -- nvmf/common.sh@296 -- # local -ga x722 00:09:10.077 23:05:31 -- nvmf/common.sh@297 -- # mlx=() 00:09:10.077 23:05:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:09:10.077 23:05:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:10.077 23:05:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:10.077 23:05:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:10.077 23:05:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:10.077 23:05:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:10.077 23:05:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:10.077 23:05:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:10.077 23:05:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:10.077 23:05:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:10.077 23:05:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:10.077 23:05:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:10.078 23:05:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:09:10.078 23:05:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:09:10.078 23:05:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:09:10.078 23:05:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:09:10.078 23:05:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:09:10.078 23:05:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:09:10.078 23:05:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:10.078 23:05:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:10.078 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:10.078 23:05:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:09:10.078 23:05:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:09:10.078 23:05:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:10.078 23:05:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:10.078 23:05:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:09:10.078 23:05:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:10.078 23:05:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:10.078 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:10.078 23:05:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:09:10.078 23:05:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:09:10.078 23:05:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:10.078 23:05:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:10.078 23:05:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:09:10.078 23:05:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:09:10.078 23:05:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:09:10.078 23:05:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:09:10.078 23:05:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:10.078 23:05:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.078 23:05:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:10.078 23:05:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.078 23:05:31 -- nvmf/common.sh@388 -- # echo 'Found net devices 
under 0000:31:00.0: cvl_0_0' 00:09:10.078 Found net devices under 0000:31:00.0: cvl_0_0 00:09:10.078 23:05:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.078 23:05:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:10.078 23:05:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.078 23:05:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:10.078 23:05:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.078 23:05:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:10.078 Found net devices under 0000:31:00.1: cvl_0_1 00:09:10.078 23:05:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.078 23:05:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:09:10.078 23:05:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:09:10.078 23:05:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:09:10.078 23:05:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:09:10.078 23:05:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:09:10.078 23:05:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:10.078 23:05:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:10.078 23:05:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:10.078 23:05:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:09:10.078 23:05:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:10.078 23:05:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:10.078 23:05:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:09:10.078 23:05:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:10.078 23:05:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:10.078 23:05:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:09:10.078 23:05:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:09:10.078 23:05:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:09:10.078 23:05:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:10.078 23:05:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:10.078 23:05:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:10.078 23:05:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:09:10.078 23:05:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:10.078 23:05:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:10.078 23:05:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:10.078 23:05:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:09:10.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:10.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.744 ms 00:09:10.078 00:09:10.078 --- 10.0.0.2 ping statistics --- 00:09:10.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.078 rtt min/avg/max/mdev = 0.744/0.744/0.744/0.000 ms 00:09:10.078 23:05:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:10.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:10.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.365 ms 00:09:10.078 00:09:10.078 --- 10.0.0.1 ping statistics --- 00:09:10.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.078 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:09:10.078 23:05:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:10.078 23:05:31 -- nvmf/common.sh@410 -- # return 0 00:09:10.078 23:05:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:10.078 23:05:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:10.078 23:05:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:10.078 23:05:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:10.078 23:05:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:10.078 23:05:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:10.078 23:05:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:10.078 23:05:31 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:10.078 23:05:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:10.078 23:05:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:10.078 23:05:31 -- common/autotest_common.sh@10 -- # set +x 00:09:10.078 23:05:31 -- nvmf/common.sh@469 -- # nvmfpid=2669157 00:09:10.078 23:05:31 -- nvmf/common.sh@470 -- # waitforlisten 2669157 00:09:10.078 23:05:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:10.078 23:05:31 -- common/autotest_common.sh@819 -- # '[' -z 2669157 ']' 00:09:10.078 23:05:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.078 23:05:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:10.078 23:05:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.078 23:05:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:10.078 23:05:31 -- common/autotest_common.sh@10 -- # set +x 00:09:10.078 [2024-06-07 23:05:31.653209] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:10.078 [2024-06-07 23:05:31.653302] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.078 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.078 [2024-06-07 23:05:31.725895] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:10.078 [2024-06-07 23:05:31.763558] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:10.078 [2024-06-07 23:05:31.763704] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:10.078 [2024-06-07 23:05:31.763714] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:10.078 [2024-06-07 23:05:31.763722] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
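The nvmfappstart step visible just above launches the target binary inside the test namespace and then blocks until its RPC socket answers. Roughly, and substituting a hand-rolled poll for the harness's waitforlisten helper (the rpc_get_methods call is only a cheap liveness probe, not what common.sh actually does):

# -i 0: shm id, -e 0xFFFF: all tracepoint groups, -m 0xF: run on four cores
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# wait until the app is listening on /var/tmp/spdk.sock before sending config RPCs
until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done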
00:09:10.078 [2024-06-07 23:05:31.763883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.078 [2024-06-07 23:05:31.764004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:10.078 [2024-06-07 23:05:31.764161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.078 [2024-06-07 23:05:31.764162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:10.078 23:05:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:10.078 23:05:32 -- common/autotest_common.sh@852 -- # return 0 00:09:10.078 23:05:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:10.078 23:05:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:10.078 23:05:32 -- common/autotest_common.sh@10 -- # set +x 00:09:10.078 23:05:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:10.078 23:05:32 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:10.078 23:05:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.078 23:05:32 -- common/autotest_common.sh@10 -- # set +x 00:09:10.078 [2024-06-07 23:05:32.462470] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:10.078 23:05:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.078 23:05:32 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:10.078 23:05:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.078 23:05:32 -- common/autotest_common.sh@10 -- # set +x 00:09:10.078 23:05:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.078 23:05:32 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:10.078 23:05:32 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:10.078 23:05:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.078 23:05:32 -- common/autotest_common.sh@10 -- # set +x 00:09:10.078 23:05:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.078 23:05:32 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:10.078 23:05:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.078 23:05:32 -- common/autotest_common.sh@10 -- # set +x 00:09:10.078 23:05:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.078 23:05:32 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:10.078 23:05:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.078 23:05:32 -- common/autotest_common.sh@10 -- # set +x 00:09:10.078 [2024-06-07 23:05:32.521873] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:10.078 23:05:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.078 23:05:32 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:09:10.078 23:05:32 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:09:10.078 23:05:32 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:09:10.078 23:05:32 -- target/connect_disconnect.sh@34 -- # set +x 00:09:12.620 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.528 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.067 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:09:21.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.982 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.527 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.097 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.984 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.432 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.887 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.513 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.962 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.499 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.018 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.015 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.389 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.482 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.853 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.413 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.497 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.406 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.853 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:11:13.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.854 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.285 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.829 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.285 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.737 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.196 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.737 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.737 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.824 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.277 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.729 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.182 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.726 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.269 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.660 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.203 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.748 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.211 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.672 23:09:22 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
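Each "disconnected 1 controller(s)" line above is one pass of the 100-iteration loop in connect_disconnect.sh. The loop body itself is not echoed in this trace; sketched here from the values set earlier (NVME_CONNECT='nvme connect -i 8', subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420), while the real script additionally waits for the namespace block device to come and go:

for i in $(seq 1 100); do
  # -i 8 requests eight I/O queues per controller
  nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
    --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
  # emits the "NQN:... disconnected 1 controller(s)" lines seen above
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
done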
00:12:59.672 23:09:22 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:59.672 23:09:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:59.672 23:09:22 -- nvmf/common.sh@116 -- # sync 00:12:59.672 23:09:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:59.672 23:09:22 -- nvmf/common.sh@119 -- # set +e 00:12:59.672 23:09:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:59.672 23:09:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:59.672 rmmod nvme_tcp 00:12:59.672 rmmod nvme_fabrics 00:12:59.672 rmmod nvme_keyring 00:12:59.672 23:09:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:59.672 23:09:22 -- nvmf/common.sh@123 -- # set -e 00:12:59.672 23:09:22 -- nvmf/common.sh@124 -- # return 0 00:12:59.672 23:09:22 -- nvmf/common.sh@477 -- # '[' -n 2669157 ']' 00:12:59.672 23:09:22 -- nvmf/common.sh@478 -- # killprocess 2669157 00:12:59.672 23:09:22 -- common/autotest_common.sh@926 -- # '[' -z 2669157 ']' 00:12:59.672 23:09:22 -- common/autotest_common.sh@930 -- # kill -0 2669157 00:12:59.672 23:09:22 -- common/autotest_common.sh@931 -- # uname 00:12:59.672 23:09:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:59.672 23:09:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2669157 00:12:59.933 23:09:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:59.933 23:09:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:59.933 23:09:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2669157' 00:12:59.933 killing process with pid 2669157 00:12:59.933 23:09:22 -- common/autotest_common.sh@945 -- # kill 2669157 00:12:59.933 23:09:22 -- common/autotest_common.sh@950 -- # wait 2669157 00:12:59.933 23:09:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:59.933 23:09:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:59.933 23:09:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:59.933 23:09:22 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:59.933 23:09:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:59.933 23:09:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.933 23:09:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:59.933 23:09:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.480 23:09:24 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:02.480 00:13:02.480 real 4m0.485s 00:13:02.480 user 15m17.219s 00:13:02.480 sys 0m19.252s 00:13:02.480 23:09:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:02.480 23:09:24 -- common/autotest_common.sh@10 -- # set +x 00:13:02.480 ************************************ 00:13:02.480 END TEST nvmf_connect_disconnect 00:13:02.480 ************************************ 00:13:02.480 23:09:24 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:02.480 23:09:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:02.480 23:09:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:02.480 23:09:24 -- common/autotest_common.sh@10 -- # set +x 00:13:02.480 ************************************ 00:13:02.480 START TEST nvmf_multitarget 00:13:02.480 ************************************ 00:13:02.480 23:09:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:02.480 * Looking for test storage... 
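The nvmftestfini teardown traced above repeats identically at the end of every test in this log. Stripped of its retry loop and error handling, it amounts to the following (the _remove_spdk_ns step has its output redirected away in the trace, so the namespace deletion is assumed rather than shown):

modprobe -v -r nvme-tcp              # also drags out nvme_fabrics / nvme_keyring, per the rmmod lines
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"   # the "killing process with pid ..." messages
# _remove_spdk_ns: presumably deletes the cvl_0_0_ns_spdk namespace (output suppressed above)
ip -4 addr flush cvl_0_1             # clear the initiator-side address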
00:13:02.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:02.480 23:09:24 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:02.480 23:09:24 -- nvmf/common.sh@7 -- # uname -s 00:13:02.480 23:09:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:02.480 23:09:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:02.480 23:09:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:02.480 23:09:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:02.480 23:09:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:02.480 23:09:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:02.480 23:09:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:02.480 23:09:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:02.480 23:09:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:02.480 23:09:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:02.480 23:09:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:02.480 23:09:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:02.480 23:09:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:02.480 23:09:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:02.480 23:09:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:02.480 23:09:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:02.480 23:09:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:02.480 23:09:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:02.480 23:09:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:02.480 23:09:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.480 23:09:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.481 23:09:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.481 23:09:24 -- paths/export.sh@5 -- # export PATH 00:13:02.481 23:09:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.481 23:09:24 -- nvmf/common.sh@46 -- # : 0 00:13:02.481 23:09:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:02.481 23:09:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:02.481 23:09:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:02.481 23:09:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:02.481 23:09:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:02.481 23:09:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:02.481 23:09:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:02.481 23:09:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:02.481 23:09:24 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:02.481 23:09:24 -- target/multitarget.sh@15 -- # nvmftestinit 00:13:02.481 23:09:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:02.481 23:09:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:02.481 23:09:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:02.481 23:09:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:02.481 23:09:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:02.481 23:09:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.481 23:09:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:02.481 23:09:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.481 23:09:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:02.481 23:09:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:02.481 23:09:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:02.481 23:09:24 -- common/autotest_common.sh@10 -- # set +x 00:13:10.639 23:09:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:10.639 23:09:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:10.639 23:09:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:10.639 23:09:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:10.639 23:09:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:10.639 23:09:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:10.639 23:09:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:10.639 23:09:31 -- nvmf/common.sh@294 -- # net_devs=() 00:13:10.639 23:09:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:10.639 23:09:31 -- 
nvmf/common.sh@295 -- # e810=() 00:13:10.639 23:09:31 -- nvmf/common.sh@295 -- # local -ga e810 00:13:10.639 23:09:31 -- nvmf/common.sh@296 -- # x722=() 00:13:10.639 23:09:31 -- nvmf/common.sh@296 -- # local -ga x722 00:13:10.639 23:09:31 -- nvmf/common.sh@297 -- # mlx=() 00:13:10.639 23:09:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:10.639 23:09:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:10.639 23:09:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:10.639 23:09:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:10.639 23:09:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:10.639 23:09:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:10.639 23:09:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:10.639 23:09:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:10.640 23:09:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:10.640 23:09:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:10.640 23:09:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:10.640 23:09:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:10.640 23:09:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:10.640 23:09:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:10.640 23:09:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:10.640 23:09:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:10.640 23:09:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:10.640 23:09:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:10.640 23:09:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:10.640 23:09:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:10.640 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:10.640 23:09:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:10.640 23:09:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:10.640 23:09:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.640 23:09:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.640 23:09:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:10.640 23:09:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:10.640 23:09:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:10.640 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:10.640 23:09:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:10.640 23:09:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:10.640 23:09:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.640 23:09:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.640 23:09:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:10.640 23:09:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:10.640 23:09:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:10.640 23:09:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:10.640 23:09:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:10.640 23:09:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.640 23:09:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:10.640 23:09:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.640 23:09:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:13:10.640 Found net devices under 0000:31:00.0: cvl_0_0 00:13:10.640 23:09:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.640 23:09:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:10.640 23:09:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.640 23:09:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:10.640 23:09:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.640 23:09:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:10.640 Found net devices under 0000:31:00.1: cvl_0_1 00:13:10.640 23:09:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.640 23:09:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:10.640 23:09:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:10.640 23:09:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:10.640 23:09:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:10.640 23:09:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:10.640 23:09:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:10.640 23:09:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:10.640 23:09:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:10.640 23:09:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:10.640 23:09:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:10.640 23:09:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:10.640 23:09:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:10.640 23:09:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:10.640 23:09:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:10.640 23:09:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:10.640 23:09:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:10.640 23:09:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:10.640 23:09:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:10.640 23:09:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:10.640 23:09:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:10.640 23:09:32 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:10.640 23:09:32 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:10.640 23:09:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:10.640 23:09:32 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:10.640 23:09:32 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:10.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:10.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.408 ms 00:13:10.640 00:13:10.640 --- 10.0.0.2 ping statistics --- 00:13:10.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.640 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:13:10.640 23:09:32 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:10.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:10.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:13:10.640 00:13:10.640 --- 10.0.0.1 ping statistics --- 00:13:10.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.640 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:13:10.640 23:09:32 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:10.640 23:09:32 -- nvmf/common.sh@410 -- # return 0 00:13:10.641 23:09:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:10.641 23:09:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:10.641 23:09:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:10.641 23:09:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:10.641 23:09:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:10.641 23:09:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:10.641 23:09:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:10.641 23:09:32 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:10.641 23:09:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:10.641 23:09:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:10.641 23:09:32 -- common/autotest_common.sh@10 -- # set +x 00:13:10.641 23:09:32 -- nvmf/common.sh@469 -- # nvmfpid=2721376 00:13:10.641 23:09:32 -- nvmf/common.sh@470 -- # waitforlisten 2721376 00:13:10.641 23:09:32 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:10.641 23:09:32 -- common/autotest_common.sh@819 -- # '[' -z 2721376 ']' 00:13:10.641 23:09:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.641 23:09:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:10.641 23:09:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.641 23:09:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:10.641 23:09:32 -- common/autotest_common.sh@10 -- # set +x 00:13:10.641 [2024-06-07 23:09:32.237702] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:10.641 [2024-06-07 23:09:32.237763] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:10.641 EAL: No free 2048 kB hugepages reported on node 1 00:13:10.641 [2024-06-07 23:09:32.310210] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:10.641 [2024-06-07 23:09:32.348765] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:10.641 [2024-06-07 23:09:32.348913] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:10.641 [2024-06-07 23:09:32.348924] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:10.641 [2024-06-07 23:09:32.348931] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
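The per-test network bring-up performed by nvmftestinit a few lines up is the same in every run of this log; condensed into one place (cvl_0_0 and cvl_0_1 are the two ports of the E810 NIC found during the PCI scan, serving as target and initiator sides of the loopback link):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                                  # sanity-check both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1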
00:13:10.641 [2024-06-07 23:09:32.349073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:10.641 [2024-06-07 23:09:32.349216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:10.641 [2024-06-07 23:09:32.349369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:10.641 [2024-06-07 23:09:32.349519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.641 23:09:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:10.641 23:09:33 -- common/autotest_common.sh@852 -- # return 0 00:13:10.641 23:09:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:10.641 23:09:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:10.641 23:09:33 -- common/autotest_common.sh@10 -- # set +x 00:13:10.641 23:09:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:10.641 23:09:33 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:10.641 23:09:33 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:10.641 23:09:33 -- target/multitarget.sh@21 -- # jq length 00:13:10.641 23:09:33 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:10.641 23:09:33 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:10.641 "nvmf_tgt_1" 00:13:10.641 23:09:33 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:10.905 "nvmf_tgt_2" 00:13:10.905 23:09:33 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:10.905 23:09:33 -- target/multitarget.sh@28 -- # jq length 00:13:10.905 23:09:33 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:10.905 23:09:33 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:10.905 true 00:13:10.905 23:09:33 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:11.167 true 00:13:11.167 23:09:33 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:11.167 23:09:33 -- target/multitarget.sh@35 -- # jq length 00:13:11.167 23:09:33 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:11.167 23:09:33 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:11.167 23:09:33 -- target/multitarget.sh@41 -- # nvmftestfini 00:13:11.167 23:09:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:11.167 23:09:33 -- nvmf/common.sh@116 -- # sync 00:13:11.167 23:09:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:11.167 23:09:33 -- nvmf/common.sh@119 -- # set +e 00:13:11.167 23:09:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:11.167 23:09:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:11.167 rmmod nvme_tcp 00:13:11.167 rmmod nvme_fabrics 00:13:11.167 rmmod nvme_keyring 00:13:11.167 23:09:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:11.167 23:09:33 -- nvmf/common.sh@123 -- # set -e 00:13:11.167 23:09:33 -- nvmf/common.sh@124 -- # return 0 
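The multitarget assertions a little further up reduce to a few calls against the test-only multitarget_rpc.py wrapper; condensed here (paths relative to the spdk checkout; -s 32 presumably caps the subsystem count of each new target, as passed in the trace):

test/nvmf/target/multitarget_rpc.py nvmf_get_targets | jq length      # 1: only the default target
test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32
test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32
test/nvmf/target/multitarget_rpc.py nvmf_get_targets | jq length      # now 3
test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1  # prints "true"
test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2
test/nvmf/target/multitarget_rpc.py nvmf_get_targets | jq length      # back to 1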
00:13:11.167 23:09:33 -- nvmf/common.sh@477 -- # '[' -n 2721376 ']' 00:13:11.167 23:09:33 -- nvmf/common.sh@478 -- # killprocess 2721376 00:13:11.167 23:09:33 -- common/autotest_common.sh@926 -- # '[' -z 2721376 ']' 00:13:11.167 23:09:33 -- common/autotest_common.sh@930 -- # kill -0 2721376 00:13:11.167 23:09:33 -- common/autotest_common.sh@931 -- # uname 00:13:11.167 23:09:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:11.167 23:09:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2721376 00:13:11.428 23:09:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:11.428 23:09:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:11.428 23:09:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2721376' 00:13:11.428 killing process with pid 2721376 00:13:11.428 23:09:33 -- common/autotest_common.sh@945 -- # kill 2721376 00:13:11.428 23:09:33 -- common/autotest_common.sh@950 -- # wait 2721376 00:13:11.428 23:09:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:11.428 23:09:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:11.428 23:09:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:11.428 23:09:33 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:11.428 23:09:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:11.428 23:09:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.428 23:09:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:11.428 23:09:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.974 23:09:36 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:13.974 00:13:13.974 real 0m11.447s 00:13:13.974 user 0m9.263s 00:13:13.974 sys 0m5.986s 00:13:13.974 23:09:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:13.974 23:09:36 -- common/autotest_common.sh@10 -- # set +x 00:13:13.974 ************************************ 00:13:13.974 END TEST nvmf_multitarget 00:13:13.974 ************************************ 00:13:13.974 23:09:36 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:13.974 23:09:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:13.974 23:09:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:13.974 23:09:36 -- common/autotest_common.sh@10 -- # set +x 00:13:13.974 ************************************ 00:13:13.974 START TEST nvmf_rpc 00:13:13.974 ************************************ 00:13:13.974 23:09:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:13.974 * Looking for test storage... 
00:13:13.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:13.974 23:09:36 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:13.974 23:09:36 -- nvmf/common.sh@7 -- # uname -s 00:13:13.974 23:09:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.974 23:09:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.974 23:09:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.974 23:09:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.974 23:09:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:13.974 23:09:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:13.974 23:09:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.974 23:09:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:13.974 23:09:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.974 23:09:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.974 23:09:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:13.974 23:09:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:13.974 23:09:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.974 23:09:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.974 23:09:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:13.974 23:09:36 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:13.974 23:09:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.974 23:09:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.974 23:09:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.974 23:09:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.974 23:09:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.974 23:09:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.974 23:09:36 -- paths/export.sh@5 -- # export PATH 00:13:13.974 23:09:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.974 23:09:36 -- nvmf/common.sh@46 -- # : 0 00:13:13.974 23:09:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:13.974 23:09:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:13.974 23:09:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:13.974 23:09:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.974 23:09:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.974 23:09:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:13.974 23:09:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:13.974 23:09:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:13.974 23:09:36 -- target/rpc.sh@11 -- # loops=5 00:13:13.974 23:09:36 -- target/rpc.sh@23 -- # nvmftestinit 00:13:13.974 23:09:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:13.974 23:09:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.974 23:09:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:13.974 23:09:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:13.974 23:09:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:13.974 23:09:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.974 23:09:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:13.974 23:09:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.974 23:09:36 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:13.974 23:09:36 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:13.975 23:09:36 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:13.975 23:09:36 -- common/autotest_common.sh@10 -- # set +x 00:13:20.665 23:09:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:20.665 23:09:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:20.665 23:09:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:20.665 23:09:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:20.665 23:09:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:20.665 23:09:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:20.665 23:09:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:20.665 23:09:43 -- nvmf/common.sh@294 -- # net_devs=() 00:13:20.665 23:09:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:20.665 23:09:43 -- nvmf/common.sh@295 -- # e810=() 00:13:20.665 23:09:43 -- nvmf/common.sh@295 -- # local -ga e810 00:13:20.665 
23:09:43 -- nvmf/common.sh@296 -- # x722=() 00:13:20.665 23:09:43 -- nvmf/common.sh@296 -- # local -ga x722 00:13:20.665 23:09:43 -- nvmf/common.sh@297 -- # mlx=() 00:13:20.665 23:09:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:20.665 23:09:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:20.665 23:09:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:20.665 23:09:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:20.665 23:09:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:20.665 23:09:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:20.665 23:09:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:20.665 23:09:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:20.665 23:09:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:20.665 23:09:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:20.665 23:09:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:20.665 23:09:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:20.665 23:09:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:20.665 23:09:43 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:20.665 23:09:43 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:20.665 23:09:43 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:20.665 23:09:43 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:20.665 23:09:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:20.665 23:09:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:20.665 23:09:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:20.665 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:20.665 23:09:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:20.665 23:09:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:20.665 23:09:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.665 23:09:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.665 23:09:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:20.665 23:09:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:20.665 23:09:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:20.665 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:20.665 23:09:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:20.665 23:09:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:20.665 23:09:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.665 23:09:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.665 23:09:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:20.665 23:09:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:20.665 23:09:43 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:20.665 23:09:43 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:20.665 23:09:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:20.665 23:09:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.665 23:09:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:20.665 23:09:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.665 23:09:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:20.665 Found net devices under 0000:31:00.0: cvl_0_0 00:13:20.665 23:09:43 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:20.665 23:09:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:20.665 23:09:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.665 23:09:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:20.665 23:09:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.665 23:09:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:20.665 Found net devices under 0000:31:00.1: cvl_0_1 00:13:20.665 23:09:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.665 23:09:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:20.665 23:09:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:20.665 23:09:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:20.665 23:09:43 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:20.665 23:09:43 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:20.665 23:09:43 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:20.665 23:09:43 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:20.665 23:09:43 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:20.665 23:09:43 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:20.665 23:09:43 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:20.665 23:09:43 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:20.665 23:09:43 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:20.665 23:09:43 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:20.665 23:09:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:20.665 23:09:43 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:20.665 23:09:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:20.665 23:09:43 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:20.665 23:09:43 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:20.665 23:09:43 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:20.665 23:09:43 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:20.665 23:09:43 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:20.665 23:09:43 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:20.927 23:09:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:20.927 23:09:43 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:20.927 23:09:43 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:20.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:20.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.579 ms 00:13:20.927 00:13:20.927 --- 10.0.0.2 ping statistics --- 00:13:20.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.927 rtt min/avg/max/mdev = 0.579/0.579/0.579/0.000 ms 00:13:20.927 23:09:43 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:20.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:20.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:13:20.927 00:13:20.927 --- 10.0.0.1 ping statistics --- 00:13:20.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.927 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:13:20.927 23:09:43 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:20.927 23:09:43 -- nvmf/common.sh@410 -- # return 0 00:13:20.927 23:09:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:20.927 23:09:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:20.927 23:09:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:20.927 23:09:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:20.927 23:09:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:20.927 23:09:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:20.927 23:09:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:20.927 23:09:43 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:20.927 23:09:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:20.927 23:09:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:20.927 23:09:43 -- common/autotest_common.sh@10 -- # set +x 00:13:20.927 23:09:43 -- nvmf/common.sh@469 -- # nvmfpid=2726108 00:13:20.927 23:09:43 -- nvmf/common.sh@470 -- # waitforlisten 2726108 00:13:20.927 23:09:43 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:20.927 23:09:43 -- common/autotest_common.sh@819 -- # '[' -z 2726108 ']' 00:13:20.927 23:09:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.927 23:09:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:20.927 23:09:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.927 23:09:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:20.927 23:09:43 -- common/autotest_common.sh@10 -- # set +x 00:13:20.927 [2024-06-07 23:09:43.529581] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:20.927 [2024-06-07 23:09:43.529640] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.927 EAL: No free 2048 kB hugepages reported on node 1 00:13:20.927 [2024-06-07 23:09:43.601131] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:21.188 [2024-06-07 23:09:43.638945] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:21.188 [2024-06-07 23:09:43.639099] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:21.189 [2024-06-07 23:09:43.639108] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:21.189 [2024-06-07 23:09:43.639115] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
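For readability, a condensed sketch of the nvmf_tcp_init sequence traced above: the target-side port is moved into a network namespace and addressed as 10.0.0.2, the initiator port stays in the root namespace as 10.0.0.1, and connectivity is verified with the two pings whose output appears in the log (interface names and addresses are the ones from this run):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address (root ns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                             # root ns -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace -> root ns
  # nvmf_tgt is then started inside the namespace, as in the trace:
  # ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF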
00:13:21.189 [2024-06-07 23:09:43.639284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.189 [2024-06-07 23:09:43.639402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:21.189 [2024-06-07 23:09:43.639572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.189 [2024-06-07 23:09:43.639573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:21.760 23:09:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:21.760 23:09:44 -- common/autotest_common.sh@852 -- # return 0 00:13:21.760 23:09:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:21.760 23:09:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:21.760 23:09:44 -- common/autotest_common.sh@10 -- # set +x 00:13:21.760 23:09:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:21.760 23:09:44 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:21.760 23:09:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.760 23:09:44 -- common/autotest_common.sh@10 -- # set +x 00:13:21.760 23:09:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.760 23:09:44 -- target/rpc.sh@26 -- # stats='{ 00:13:21.760 "tick_rate": 2400000000, 00:13:21.760 "poll_groups": [ 00:13:21.760 { 00:13:21.760 "name": "nvmf_tgt_poll_group_0", 00:13:21.760 "admin_qpairs": 0, 00:13:21.760 "io_qpairs": 0, 00:13:21.760 "current_admin_qpairs": 0, 00:13:21.760 "current_io_qpairs": 0, 00:13:21.760 "pending_bdev_io": 0, 00:13:21.760 "completed_nvme_io": 0, 00:13:21.760 "transports": [] 00:13:21.760 }, 00:13:21.760 { 00:13:21.760 "name": "nvmf_tgt_poll_group_1", 00:13:21.760 "admin_qpairs": 0, 00:13:21.760 "io_qpairs": 0, 00:13:21.760 "current_admin_qpairs": 0, 00:13:21.760 "current_io_qpairs": 0, 00:13:21.760 "pending_bdev_io": 0, 00:13:21.760 "completed_nvme_io": 0, 00:13:21.760 "transports": [] 00:13:21.760 }, 00:13:21.760 { 00:13:21.760 "name": "nvmf_tgt_poll_group_2", 00:13:21.760 "admin_qpairs": 0, 00:13:21.760 "io_qpairs": 0, 00:13:21.760 "current_admin_qpairs": 0, 00:13:21.760 "current_io_qpairs": 0, 00:13:21.760 "pending_bdev_io": 0, 00:13:21.760 "completed_nvme_io": 0, 00:13:21.760 "transports": [] 00:13:21.760 }, 00:13:21.760 { 00:13:21.760 "name": "nvmf_tgt_poll_group_3", 00:13:21.760 "admin_qpairs": 0, 00:13:21.760 "io_qpairs": 0, 00:13:21.760 "current_admin_qpairs": 0, 00:13:21.760 "current_io_qpairs": 0, 00:13:21.760 "pending_bdev_io": 0, 00:13:21.760 "completed_nvme_io": 0, 00:13:21.760 "transports": [] 00:13:21.760 } 00:13:21.760 ] 00:13:21.760 }' 00:13:21.760 23:09:44 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:21.760 23:09:44 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:21.760 23:09:44 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:21.760 23:09:44 -- target/rpc.sh@15 -- # wc -l 00:13:21.760 23:09:44 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:21.760 23:09:44 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:22.022 23:09:44 -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:22.022 23:09:44 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:22.022 23:09:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.022 23:09:44 -- common/autotest_common.sh@10 -- # set +x 00:13:22.022 [2024-06-07 23:09:44.467957] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:22.022 23:09:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.022 23:09:44 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:22.022 23:09:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.022 23:09:44 -- common/autotest_common.sh@10 -- # set +x 00:13:22.022 23:09:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.022 23:09:44 -- target/rpc.sh@33 -- # stats='{ 00:13:22.022 "tick_rate": 2400000000, 00:13:22.022 "poll_groups": [ 00:13:22.022 { 00:13:22.022 "name": "nvmf_tgt_poll_group_0", 00:13:22.022 "admin_qpairs": 0, 00:13:22.022 "io_qpairs": 0, 00:13:22.022 "current_admin_qpairs": 0, 00:13:22.022 "current_io_qpairs": 0, 00:13:22.022 "pending_bdev_io": 0, 00:13:22.022 "completed_nvme_io": 0, 00:13:22.022 "transports": [ 00:13:22.022 { 00:13:22.022 "trtype": "TCP" 00:13:22.022 } 00:13:22.022 ] 00:13:22.022 }, 00:13:22.022 { 00:13:22.022 "name": "nvmf_tgt_poll_group_1", 00:13:22.022 "admin_qpairs": 0, 00:13:22.022 "io_qpairs": 0, 00:13:22.022 "current_admin_qpairs": 0, 00:13:22.022 "current_io_qpairs": 0, 00:13:22.022 "pending_bdev_io": 0, 00:13:22.022 "completed_nvme_io": 0, 00:13:22.022 "transports": [ 00:13:22.022 { 00:13:22.022 "trtype": "TCP" 00:13:22.022 } 00:13:22.022 ] 00:13:22.022 }, 00:13:22.022 { 00:13:22.022 "name": "nvmf_tgt_poll_group_2", 00:13:22.022 "admin_qpairs": 0, 00:13:22.022 "io_qpairs": 0, 00:13:22.022 "current_admin_qpairs": 0, 00:13:22.022 "current_io_qpairs": 0, 00:13:22.022 "pending_bdev_io": 0, 00:13:22.022 "completed_nvme_io": 0, 00:13:22.022 "transports": [ 00:13:22.022 { 00:13:22.022 "trtype": "TCP" 00:13:22.022 } 00:13:22.022 ] 00:13:22.022 }, 00:13:22.022 { 00:13:22.022 "name": "nvmf_tgt_poll_group_3", 00:13:22.022 "admin_qpairs": 0, 00:13:22.022 "io_qpairs": 0, 00:13:22.022 "current_admin_qpairs": 0, 00:13:22.022 "current_io_qpairs": 0, 00:13:22.022 "pending_bdev_io": 0, 00:13:22.022 "completed_nvme_io": 0, 00:13:22.022 "transports": [ 00:13:22.022 { 00:13:22.022 "trtype": "TCP" 00:13:22.022 } 00:13:22.022 ] 00:13:22.022 } 00:13:22.022 ] 00:13:22.022 }' 00:13:22.022 23:09:44 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:22.022 23:09:44 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:22.022 23:09:44 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:22.022 23:09:44 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:22.022 23:09:44 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:22.022 23:09:44 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:22.022 23:09:44 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:22.022 23:09:44 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:22.022 23:09:44 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:22.022 23:09:44 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:22.022 23:09:44 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:22.022 23:09:44 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:22.022 23:09:44 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:22.022 23:09:44 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:22.022 23:09:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.022 23:09:44 -- common/autotest_common.sh@10 -- # set +x 00:13:22.022 Malloc1 00:13:22.022 23:09:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.022 23:09:44 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:22.022 23:09:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.022 23:09:44 -- common/autotest_common.sh@10 -- # set +x 00:13:22.022 
23:09:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.022 23:09:44 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:22.022 23:09:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.022 23:09:44 -- common/autotest_common.sh@10 -- # set +x 00:13:22.022 23:09:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.022 23:09:44 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:22.022 23:09:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.022 23:09:44 -- common/autotest_common.sh@10 -- # set +x 00:13:22.022 23:09:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.022 23:09:44 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.022 23:09:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.022 23:09:44 -- common/autotest_common.sh@10 -- # set +x 00:13:22.022 [2024-06-07 23:09:44.635755] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.022 23:09:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.022 23:09:44 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:13:22.022 23:09:44 -- common/autotest_common.sh@640 -- # local es=0 00:13:22.022 23:09:44 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:13:22.022 23:09:44 -- common/autotest_common.sh@628 -- # local arg=nvme 00:13:22.022 23:09:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:22.022 23:09:44 -- common/autotest_common.sh@632 -- # type -t nvme 00:13:22.022 23:09:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:22.022 23:09:44 -- common/autotest_common.sh@634 -- # type -P nvme 00:13:22.023 23:09:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:22.023 23:09:44 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:13:22.023 23:09:44 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:13:22.023 23:09:44 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:13:22.023 [2024-06-07 23:09:44.662592] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:13:22.023 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:22.023 could not add new controller: failed to write to nvme-fabrics device 00:13:22.023 23:09:44 -- common/autotest_common.sh@643 -- # es=1 00:13:22.023 23:09:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:22.023 23:09:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:22.023 23:09:44 -- common/autotest_common.sh@667 -- # 
(( !es == 0 )) 00:13:22.023 23:09:44 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:22.023 23:09:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.023 23:09:44 -- common/autotest_common.sh@10 -- # set +x 00:13:22.023 23:09:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.023 23:09:44 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:23.935 23:09:46 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:23.935 23:09:46 -- common/autotest_common.sh@1177 -- # local i=0 00:13:23.935 23:09:46 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:23.935 23:09:46 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:23.935 23:09:46 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:25.888 23:09:48 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:25.888 23:09:48 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:25.888 23:09:48 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:25.888 23:09:48 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:25.888 23:09:48 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:25.888 23:09:48 -- common/autotest_common.sh@1187 -- # return 0 00:13:25.888 23:09:48 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:25.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.888 23:09:48 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:25.888 23:09:48 -- common/autotest_common.sh@1198 -- # local i=0 00:13:25.888 23:09:48 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:25.888 23:09:48 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:25.888 23:09:48 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:25.888 23:09:48 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:25.888 23:09:48 -- common/autotest_common.sh@1210 -- # return 0 00:13:25.888 23:09:48 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:25.888 23:09:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:25.888 23:09:48 -- common/autotest_common.sh@10 -- # set +x 00:13:25.888 23:09:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:25.888 23:09:48 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:25.888 23:09:48 -- common/autotest_common.sh@640 -- # local es=0 00:13:25.888 23:09:48 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:25.888 23:09:48 -- common/autotest_common.sh@628 -- # local arg=nvme 00:13:25.888 23:09:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:25.888 23:09:48 -- common/autotest_common.sh@632 -- # type -t nvme 00:13:25.888 23:09:48 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:25.888 23:09:48 -- common/autotest_common.sh@634 -- # type -P nvme 00:13:25.888 23:09:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:25.888 23:09:48 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:13:25.888 23:09:48 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:13:25.888 23:09:48 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:25.888 [2024-06-07 23:09:48.318927] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:13:25.888 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:25.888 could not add new controller: failed to write to nvme-fabrics device 00:13:25.888 23:09:48 -- common/autotest_common.sh@643 -- # es=1 00:13:25.888 23:09:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:25.888 23:09:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:25.888 23:09:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:25.888 23:09:48 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:25.888 23:09:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:25.888 23:09:48 -- common/autotest_common.sh@10 -- # set +x 00:13:25.888 23:09:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:25.888 23:09:48 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:27.274 23:09:49 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:27.274 23:09:49 -- common/autotest_common.sh@1177 -- # local i=0 00:13:27.274 23:09:49 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:27.274 23:09:49 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:27.274 23:09:49 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:29.187 23:09:51 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:29.187 23:09:51 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:29.187 23:09:51 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:29.448 23:09:51 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:29.448 23:09:51 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:29.448 23:09:51 -- common/autotest_common.sh@1187 -- # return 0 00:13:29.448 23:09:51 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:29.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.448 23:09:51 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:29.448 23:09:51 -- common/autotest_common.sh@1198 -- # local i=0 00:13:29.448 23:09:51 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:29.448 23:09:51 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:29.448 23:09:51 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:29.448 23:09:51 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:29.448 23:09:51 -- common/autotest_common.sh@1210 -- # return 0 00:13:29.448 23:09:51 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:29.448 23:09:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.448 23:09:51 -- common/autotest_common.sh@10 -- # set +x 00:13:29.448 23:09:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.448 23:09:51 -- target/rpc.sh@81 -- # seq 1 5 00:13:29.448 23:09:51 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:29.448 23:09:51 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:29.448 23:09:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.448 23:09:51 -- common/autotest_common.sh@10 -- # set +x 00:13:29.448 23:09:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.448 23:09:51 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:29.448 23:09:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.448 23:09:51 -- common/autotest_common.sh@10 -- # set +x 00:13:29.448 [2024-06-07 23:09:52.003054] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:29.448 23:09:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.448 23:09:52 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:29.448 23:09:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.448 23:09:52 -- common/autotest_common.sh@10 -- # set +x 00:13:29.448 23:09:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.448 23:09:52 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:29.448 23:09:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.448 23:09:52 -- common/autotest_common.sh@10 -- # set +x 00:13:29.448 23:09:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.448 23:09:52 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:31.359 23:09:53 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:31.359 23:09:53 -- common/autotest_common.sh@1177 -- # local i=0 00:13:31.359 23:09:53 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:31.359 23:09:53 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:31.359 23:09:53 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:33.274 23:09:55 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:33.274 23:09:55 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:33.274 23:09:55 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:33.274 23:09:55 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:33.274 23:09:55 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:33.274 23:09:55 -- common/autotest_common.sh@1187 -- # return 0 00:13:33.274 23:09:55 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:33.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.274 23:09:55 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:33.274 23:09:55 -- common/autotest_common.sh@1198 -- # local i=0 00:13:33.274 23:09:55 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:33.274 23:09:55 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 
00:13:33.274 23:09:55 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:33.274 23:09:55 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:33.274 23:09:55 -- common/autotest_common.sh@1210 -- # return 0 00:13:33.274 23:09:55 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:33.274 23:09:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:33.274 23:09:55 -- common/autotest_common.sh@10 -- # set +x 00:13:33.274 23:09:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:33.274 23:09:55 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:33.274 23:09:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:33.274 23:09:55 -- common/autotest_common.sh@10 -- # set +x 00:13:33.274 23:09:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:33.274 23:09:55 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:33.274 23:09:55 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:33.274 23:09:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:33.274 23:09:55 -- common/autotest_common.sh@10 -- # set +x 00:13:33.274 23:09:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:33.274 23:09:55 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:33.274 23:09:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:33.274 23:09:55 -- common/autotest_common.sh@10 -- # set +x 00:13:33.274 [2024-06-07 23:09:55.697496] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:33.274 23:09:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:33.274 23:09:55 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:33.274 23:09:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:33.274 23:09:55 -- common/autotest_common.sh@10 -- # set +x 00:13:33.274 23:09:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:33.274 23:09:55 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:33.274 23:09:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:33.274 23:09:55 -- common/autotest_common.sh@10 -- # set +x 00:13:33.274 23:09:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:33.274 23:09:55 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:34.660 23:09:57 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:34.660 23:09:57 -- common/autotest_common.sh@1177 -- # local i=0 00:13:34.660 23:09:57 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:34.660 23:09:57 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:34.660 23:09:57 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:36.573 23:09:59 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:36.573 23:09:59 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:36.573 23:09:59 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:36.573 23:09:59 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:36.573 23:09:59 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:36.573 23:09:59 -- 
common/autotest_common.sh@1187 -- # return 0 00:13:36.573 23:09:59 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:36.834 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.834 23:09:59 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:36.834 23:09:59 -- common/autotest_common.sh@1198 -- # local i=0 00:13:36.834 23:09:59 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:36.834 23:09:59 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:36.834 23:09:59 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:36.834 23:09:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:36.834 23:09:59 -- common/autotest_common.sh@1210 -- # return 0 00:13:36.834 23:09:59 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:36.834 23:09:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.834 23:09:59 -- common/autotest_common.sh@10 -- # set +x 00:13:36.834 23:09:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.834 23:09:59 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:36.834 23:09:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.834 23:09:59 -- common/autotest_common.sh@10 -- # set +x 00:13:36.834 23:09:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.834 23:09:59 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:36.834 23:09:59 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:36.834 23:09:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.834 23:09:59 -- common/autotest_common.sh@10 -- # set +x 00:13:36.834 23:09:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.834 23:09:59 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.834 23:09:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.834 23:09:59 -- common/autotest_common.sh@10 -- # set +x 00:13:36.834 [2024-06-07 23:09:59.366344] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.834 23:09:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.834 23:09:59 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:36.834 23:09:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.834 23:09:59 -- common/autotest_common.sh@10 -- # set +x 00:13:36.834 23:09:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.834 23:09:59 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:36.834 23:09:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.834 23:09:59 -- common/autotest_common.sh@10 -- # set +x 00:13:36.834 23:09:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.834 23:09:59 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:38.746 23:10:00 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:38.746 23:10:00 -- common/autotest_common.sh@1177 -- # local i=0 00:13:38.746 23:10:00 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:38.746 23:10:00 -- common/autotest_common.sh@1179 -- 
# [[ -n '' ]] 00:13:38.746 23:10:00 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:40.659 23:10:02 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:40.659 23:10:02 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:40.659 23:10:02 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:40.659 23:10:02 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:40.659 23:10:02 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:40.659 23:10:02 -- common/autotest_common.sh@1187 -- # return 0 00:13:40.659 23:10:02 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:40.659 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.659 23:10:02 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:40.659 23:10:02 -- common/autotest_common.sh@1198 -- # local i=0 00:13:40.659 23:10:02 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:40.659 23:10:02 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:40.659 23:10:03 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:40.659 23:10:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:40.659 23:10:03 -- common/autotest_common.sh@1210 -- # return 0 00:13:40.659 23:10:03 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:40.659 23:10:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.659 23:10:03 -- common/autotest_common.sh@10 -- # set +x 00:13:40.659 23:10:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.659 23:10:03 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:40.659 23:10:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.659 23:10:03 -- common/autotest_common.sh@10 -- # set +x 00:13:40.659 23:10:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.659 23:10:03 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:40.659 23:10:03 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:40.659 23:10:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.659 23:10:03 -- common/autotest_common.sh@10 -- # set +x 00:13:40.659 23:10:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.659 23:10:03 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:40.659 23:10:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.659 23:10:03 -- common/autotest_common.sh@10 -- # set +x 00:13:40.659 [2024-06-07 23:10:03.058623] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.659 23:10:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.659 23:10:03 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:40.659 23:10:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.659 23:10:03 -- common/autotest_common.sh@10 -- # set +x 00:13:40.659 23:10:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.659 23:10:03 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:40.659 23:10:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.659 23:10:03 -- common/autotest_common.sh@10 -- # set +x 00:13:40.659 23:10:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.659 
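Before the connect step below, a plain-rpc.py sketch of one iteration of this create/connect/teardown loop (rpc_cmd in the trace is the autotest wrapper around the same calls; NQN, serial, listener address, and nsid are the values from this run, and the --hostnqn/--hostid arguments are elided):

  RPC=./scripts/rpc.py
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5     # namespace id 5
  $RPC nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420  # plus --hostnqn/--hostid as traced
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1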
23:10:03 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:42.046 23:10:04 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:42.046 23:10:04 -- common/autotest_common.sh@1177 -- # local i=0 00:13:42.046 23:10:04 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:42.046 23:10:04 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:42.046 23:10:04 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:43.961 23:10:06 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:43.961 23:10:06 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:43.961 23:10:06 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:43.961 23:10:06 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:43.961 23:10:06 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:43.961 23:10:06 -- common/autotest_common.sh@1187 -- # return 0 00:13:43.961 23:10:06 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:44.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.223 23:10:06 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:44.223 23:10:06 -- common/autotest_common.sh@1198 -- # local i=0 00:13:44.223 23:10:06 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:44.223 23:10:06 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.223 23:10:06 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:44.223 23:10:06 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.223 23:10:06 -- common/autotest_common.sh@1210 -- # return 0 00:13:44.223 23:10:06 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:44.223 23:10:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.223 23:10:06 -- common/autotest_common.sh@10 -- # set +x 00:13:44.223 23:10:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.223 23:10:06 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:44.223 23:10:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.223 23:10:06 -- common/autotest_common.sh@10 -- # set +x 00:13:44.223 23:10:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.223 23:10:06 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:44.223 23:10:06 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:44.223 23:10:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.223 23:10:06 -- common/autotest_common.sh@10 -- # set +x 00:13:44.223 23:10:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.223 23:10:06 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:44.223 23:10:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.223 23:10:06 -- common/autotest_common.sh@10 -- # set +x 00:13:44.223 [2024-06-07 23:10:06.764398] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.223 23:10:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.223 23:10:06 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:44.223 
23:10:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.223 23:10:06 -- common/autotest_common.sh@10 -- # set +x 00:13:44.223 23:10:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.223 23:10:06 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:44.223 23:10:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.223 23:10:06 -- common/autotest_common.sh@10 -- # set +x 00:13:44.223 23:10:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.223 23:10:06 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:45.610 23:10:08 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:45.610 23:10:08 -- common/autotest_common.sh@1177 -- # local i=0 00:13:45.610 23:10:08 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:45.610 23:10:08 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:45.610 23:10:08 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:48.155 23:10:10 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:48.155 23:10:10 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:48.155 23:10:10 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:48.155 23:10:10 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:48.155 23:10:10 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:48.155 23:10:10 -- common/autotest_common.sh@1187 -- # return 0 00:13:48.155 23:10:10 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:48.155 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.155 23:10:10 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:48.155 23:10:10 -- common/autotest_common.sh@1198 -- # local i=0 00:13:48.155 23:10:10 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:48.155 23:10:10 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:48.155 23:10:10 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:48.155 23:10:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:48.155 23:10:10 -- common/autotest_common.sh@1210 -- # return 0 00:13:48.155 23:10:10 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:48.155 23:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.155 23:10:10 -- common/autotest_common.sh@10 -- # set +x 00:13:48.155 23:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.155 23:10:10 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:48.155 23:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.155 23:10:10 -- common/autotest_common.sh@10 -- # set +x 00:13:48.155 23:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.155 23:10:10 -- target/rpc.sh@99 -- # seq 1 5 00:13:48.155 23:10:10 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:48.155 23:10:10 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:48.155 23:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.155 23:10:10 -- common/autotest_common.sh@10 -- # set +x 00:13:48.155 23:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.155 23:10:10 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.155 23:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.155 23:10:10 -- common/autotest_common.sh@10 -- # set +x 00:13:48.155 [2024-06-07 23:10:10.444328] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.155 23:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.155 23:10:10 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:48.155 23:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.155 23:10:10 -- common/autotest_common.sh@10 -- # set +x 00:13:48.155 23:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.155 23:10:10 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:48.155 23:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.155 23:10:10 -- common/autotest_common.sh@10 -- # set +x 00:13:48.155 23:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.155 23:10:10 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.155 23:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.155 23:10:10 -- common/autotest_common.sh@10 -- # set +x 00:13:48.155 23:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.155 23:10:10 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:48.155 23:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.155 23:10:10 -- common/autotest_common.sh@10 -- # set +x 00:13:48.155 23:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.155 23:10:10 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:48.155 23:10:10 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:48.155 23:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.155 23:10:10 -- common/autotest_common.sh@10 -- # set +x 00:13:48.155 23:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.156 23:10:10 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.156 23:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.156 23:10:10 -- common/autotest_common.sh@10 -- # set +x 00:13:48.156 [2024-06-07 23:10:10.500438] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.156 23:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.156 23:10:10 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:48.156 23:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.156 23:10:10 -- common/autotest_common.sh@10 -- # set +x 00:13:48.156 23:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.156 23:10:10 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:48.156 23:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.156 23:10:10 -- common/autotest_common.sh@10 -- # set +x 00:13:48.156 23:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.156 23:10:10 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.156 23:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.156 23:10:10 -- 
common/autotest_common.sh@10 -- # set +x 00:13:48.156 23:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.156 23:10:10 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:48.156 23:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.156 23:10:10 -- common/autotest_common.sh@10 -- # set +x 00:13:48.156 23:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.156 23:10:10 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:48.156 23:10:10 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:48.156 23:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.156 23:10:10 -- common/autotest_common.sh@10 -- # set +x 00:13:48.156 23:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.156 23:10:10 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.156 23:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.156 23:10:10 -- common/autotest_common.sh@10 -- # set +x 00:13:48.156 [2024-06-07 23:10:10.564623] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.156 23:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.156 23:10:10 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:48.156 23:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.156 23:10:10 -- common/autotest_common.sh@10 -- # set +x 00:13:48.156 23:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.156 23:10:10 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:48.156 23:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.156 23:10:10 -- common/autotest_common.sh@10 -- # set +x 00:13:48.156 23:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.156 23:10:10 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.156 23:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.156 23:10:10 -- common/autotest_common.sh@10 -- # set +x 00:13:48.156 23:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.156 23:10:10 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:48.156 23:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.156 23:10:10 -- common/autotest_common.sh@10 -- # set +x 00:13:48.156 23:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.156 23:10:10 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:48.156 23:10:10 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:48.156 23:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.156 23:10:10 -- common/autotest_common.sh@10 -- # set +x 00:13:48.156 23:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.156 23:10:10 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.156 23:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.156 23:10:10 -- common/autotest_common.sh@10 -- # set +x 00:13:48.156 [2024-06-07 23:10:10.620785] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.156 23:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.156 
23:10:10 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:48.156 23:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.156 23:10:10 -- common/autotest_common.sh@10 -- # set +x 00:13:48.156 23:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.156 23:10:10 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:48.156 23:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.156 23:10:10 -- common/autotest_common.sh@10 -- # set +x 00:13:48.156 23:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.156 23:10:10 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.156 23:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.156 23:10:10 -- common/autotest_common.sh@10 -- # set +x 00:13:48.156 23:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.156 23:10:10 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:48.156 23:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.156 23:10:10 -- common/autotest_common.sh@10 -- # set +x 00:13:48.156 23:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.156 23:10:10 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:48.156 23:10:10 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:48.156 23:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.156 23:10:10 -- common/autotest_common.sh@10 -- # set +x 00:13:48.156 23:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.156 23:10:10 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.156 23:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.156 23:10:10 -- common/autotest_common.sh@10 -- # set +x 00:13:48.156 [2024-06-07 23:10:10.676976] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.156 23:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.156 23:10:10 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:48.156 23:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.156 23:10:10 -- common/autotest_common.sh@10 -- # set +x 00:13:48.156 23:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.156 23:10:10 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:48.156 23:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.156 23:10:10 -- common/autotest_common.sh@10 -- # set +x 00:13:48.156 23:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.156 23:10:10 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.156 23:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.156 23:10:10 -- common/autotest_common.sh@10 -- # set +x 00:13:48.156 23:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.156 23:10:10 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:48.156 23:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.156 23:10:10 -- common/autotest_common.sh@10 -- # set +x 00:13:48.156 23:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.156 23:10:10 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
00:13:48.156 23:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.156 23:10:10 -- common/autotest_common.sh@10 -- # set +x 00:13:48.156 23:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.156 23:10:10 -- target/rpc.sh@110 -- # stats='{ 00:13:48.156 "tick_rate": 2400000000, 00:13:48.156 "poll_groups": [ 00:13:48.156 { 00:13:48.156 "name": "nvmf_tgt_poll_group_0", 00:13:48.156 "admin_qpairs": 0, 00:13:48.156 "io_qpairs": 224, 00:13:48.156 "current_admin_qpairs": 0, 00:13:48.156 "current_io_qpairs": 0, 00:13:48.156 "pending_bdev_io": 0, 00:13:48.156 "completed_nvme_io": 423, 00:13:48.156 "transports": [ 00:13:48.156 { 00:13:48.156 "trtype": "TCP" 00:13:48.156 } 00:13:48.156 ] 00:13:48.156 }, 00:13:48.156 { 00:13:48.156 "name": "nvmf_tgt_poll_group_1", 00:13:48.156 "admin_qpairs": 1, 00:13:48.156 "io_qpairs": 223, 00:13:48.156 "current_admin_qpairs": 0, 00:13:48.156 "current_io_qpairs": 0, 00:13:48.156 "pending_bdev_io": 0, 00:13:48.156 "completed_nvme_io": 321, 00:13:48.156 "transports": [ 00:13:48.156 { 00:13:48.156 "trtype": "TCP" 00:13:48.156 } 00:13:48.156 ] 00:13:48.156 }, 00:13:48.156 { 00:13:48.156 "name": "nvmf_tgt_poll_group_2", 00:13:48.156 "admin_qpairs": 6, 00:13:48.156 "io_qpairs": 218, 00:13:48.156 "current_admin_qpairs": 0, 00:13:48.156 "current_io_qpairs": 0, 00:13:48.156 "pending_bdev_io": 0, 00:13:48.156 "completed_nvme_io": 219, 00:13:48.156 "transports": [ 00:13:48.156 { 00:13:48.156 "trtype": "TCP" 00:13:48.156 } 00:13:48.156 ] 00:13:48.156 }, 00:13:48.156 { 00:13:48.156 "name": "nvmf_tgt_poll_group_3", 00:13:48.156 "admin_qpairs": 0, 00:13:48.156 "io_qpairs": 224, 00:13:48.156 "current_admin_qpairs": 0, 00:13:48.156 "current_io_qpairs": 0, 00:13:48.156 "pending_bdev_io": 0, 00:13:48.156 "completed_nvme_io": 276, 00:13:48.156 "transports": [ 00:13:48.156 { 00:13:48.156 "trtype": "TCP" 00:13:48.156 } 00:13:48.156 ] 00:13:48.156 } 00:13:48.156 ] 00:13:48.156 }' 00:13:48.156 23:10:10 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:48.156 23:10:10 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:48.156 23:10:10 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:48.156 23:10:10 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:48.156 23:10:10 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:48.156 23:10:10 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:48.156 23:10:10 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:48.156 23:10:10 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:48.156 23:10:10 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:48.156 23:10:10 -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:48.157 23:10:10 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:48.157 23:10:10 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:48.157 23:10:10 -- target/rpc.sh@123 -- # nvmftestfini 00:13:48.157 23:10:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:48.157 23:10:10 -- nvmf/common.sh@116 -- # sync 00:13:48.157 23:10:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:48.157 23:10:10 -- nvmf/common.sh@119 -- # set +e 00:13:48.157 23:10:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:48.157 23:10:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:48.157 rmmod nvme_tcp 00:13:48.417 rmmod nvme_fabrics 00:13:48.417 rmmod nvme_keyring 00:13:48.417 23:10:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:48.417 23:10:10 -- nvmf/common.sh@123 -- # set -e 00:13:48.417 23:10:10 -- 
nvmf/common.sh@124 -- # return 0 00:13:48.417 23:10:10 -- nvmf/common.sh@477 -- # '[' -n 2726108 ']' 00:13:48.417 23:10:10 -- nvmf/common.sh@478 -- # killprocess 2726108 00:13:48.417 23:10:10 -- common/autotest_common.sh@926 -- # '[' -z 2726108 ']' 00:13:48.417 23:10:10 -- common/autotest_common.sh@930 -- # kill -0 2726108 00:13:48.417 23:10:10 -- common/autotest_common.sh@931 -- # uname 00:13:48.417 23:10:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:48.417 23:10:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2726108 00:13:48.417 23:10:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:48.417 23:10:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:48.417 23:10:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2726108' 00:13:48.417 killing process with pid 2726108 00:13:48.417 23:10:10 -- common/autotest_common.sh@945 -- # kill 2726108 00:13:48.417 23:10:10 -- common/autotest_common.sh@950 -- # wait 2726108 00:13:48.417 23:10:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:48.417 23:10:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:48.417 23:10:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:48.417 23:10:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:48.417 23:10:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:48.417 23:10:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.417 23:10:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:48.417 23:10:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.963 23:10:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:50.963 00:13:50.963 real 0m37.037s 00:13:50.963 user 1m51.593s 00:13:50.963 sys 0m6.962s 00:13:50.963 23:10:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:50.963 23:10:13 -- common/autotest_common.sh@10 -- # set +x 00:13:50.963 ************************************ 00:13:50.963 END TEST nvmf_rpc 00:13:50.963 ************************************ 00:13:50.963 23:10:13 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:50.963 23:10:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:50.963 23:10:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:50.963 23:10:13 -- common/autotest_common.sh@10 -- # set +x 00:13:50.963 ************************************ 00:13:50.963 START TEST nvmf_invalid 00:13:50.963 ************************************ 00:13:50.963 23:10:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:50.963 * Looking for test storage... 
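For reference, the qpair totals checked just before nvmf_rpc finished come from the jsum helper traced at target/rpc.sh@19-20: it applies a jq filter to the captured nvmf_get_stats JSON and sums the matching numbers with awk. A minimal sketch of that pattern, assuming the stats JSON is held in $stats exactly as captured in the trace above:

  jsum() {
      local filter=$1
      # sum one numeric field across all poll groups reported by nvmf_get_stats
      jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
  }

  # io_qpairs in the trace:    224 + 223 + 218 + 224 = 889, hence "(( 889 > 0 ))"
  # admin_qpairs in the trace:   0 +   1 +   6 +   0 =   7, hence "(( 7 > 0 ))"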
00:13:50.963 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:50.963 23:10:13 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:50.963 23:10:13 -- nvmf/common.sh@7 -- # uname -s 00:13:50.964 23:10:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:50.964 23:10:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:50.964 23:10:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:50.964 23:10:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:50.964 23:10:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:50.964 23:10:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:50.964 23:10:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:50.964 23:10:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:50.964 23:10:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:50.964 23:10:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:50.964 23:10:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:50.964 23:10:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:50.964 23:10:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:50.964 23:10:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:50.964 23:10:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:50.964 23:10:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:50.964 23:10:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:50.964 23:10:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:50.964 23:10:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:50.964 23:10:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.964 23:10:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.964 23:10:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.964 23:10:13 -- paths/export.sh@5 -- # export PATH 00:13:50.964 23:10:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.964 23:10:13 -- nvmf/common.sh@46 -- # : 0 00:13:50.964 23:10:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:50.964 23:10:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:50.964 23:10:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:50.964 23:10:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:50.964 23:10:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:50.964 23:10:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:50.964 23:10:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:50.964 23:10:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:50.964 23:10:13 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:50.964 23:10:13 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:50.964 23:10:13 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:50.964 23:10:13 -- target/invalid.sh@14 -- # target=foobar 00:13:50.964 23:10:13 -- target/invalid.sh@16 -- # RANDOM=0 00:13:50.964 23:10:13 -- target/invalid.sh@34 -- # nvmftestinit 00:13:50.964 23:10:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:50.964 23:10:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:50.964 23:10:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:50.964 23:10:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:50.964 23:10:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:50.964 23:10:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.964 23:10:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:50.964 23:10:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.964 23:10:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:50.964 23:10:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:50.964 23:10:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:50.964 23:10:13 -- common/autotest_common.sh@10 -- # set +x 00:13:57.618 23:10:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:57.618 23:10:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:57.618 23:10:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:57.618 23:10:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:57.618 23:10:20 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:57.618 23:10:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:57.618 23:10:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:57.618 23:10:20 -- nvmf/common.sh@294 -- # net_devs=() 00:13:57.618 23:10:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:57.618 23:10:20 -- nvmf/common.sh@295 -- # e810=() 00:13:57.618 23:10:20 -- nvmf/common.sh@295 -- # local -ga e810 00:13:57.618 23:10:20 -- nvmf/common.sh@296 -- # x722=() 00:13:57.618 23:10:20 -- nvmf/common.sh@296 -- # local -ga x722 00:13:57.618 23:10:20 -- nvmf/common.sh@297 -- # mlx=() 00:13:57.618 23:10:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:57.618 23:10:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:57.618 23:10:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:57.618 23:10:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:57.618 23:10:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:57.618 23:10:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:57.618 23:10:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:57.618 23:10:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:57.618 23:10:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:57.618 23:10:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:57.618 23:10:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:57.618 23:10:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:57.618 23:10:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:57.618 23:10:20 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:57.618 23:10:20 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:57.618 23:10:20 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:57.618 23:10:20 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:57.618 23:10:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:57.618 23:10:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:57.618 23:10:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:57.618 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:57.618 23:10:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:57.618 23:10:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:57.618 23:10:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:57.618 23:10:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:57.618 23:10:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:57.618 23:10:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:57.618 23:10:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:57.618 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:57.618 23:10:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:57.618 23:10:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:57.618 23:10:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:57.618 23:10:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:57.618 23:10:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:57.618 23:10:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:57.618 23:10:20 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:57.618 23:10:20 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:57.618 23:10:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:57.618 
23:10:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:57.618 23:10:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:57.618 23:10:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:57.618 23:10:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:57.618 Found net devices under 0000:31:00.0: cvl_0_0 00:13:57.618 23:10:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:57.618 23:10:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:57.618 23:10:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:57.618 23:10:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:57.618 23:10:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:57.618 23:10:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:57.618 Found net devices under 0000:31:00.1: cvl_0_1 00:13:57.618 23:10:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:57.618 23:10:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:57.618 23:10:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:57.618 23:10:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:57.618 23:10:20 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:57.618 23:10:20 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:57.618 23:10:20 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:57.618 23:10:20 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:57.618 23:10:20 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:57.618 23:10:20 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:57.618 23:10:20 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:57.618 23:10:20 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:57.618 23:10:20 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:57.618 23:10:20 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:57.618 23:10:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:57.618 23:10:20 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:57.618 23:10:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:57.618 23:10:20 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:57.618 23:10:20 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:57.879 23:10:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:57.879 23:10:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:57.879 23:10:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:57.879 23:10:20 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:57.879 23:10:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:57.879 23:10:20 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:57.879 23:10:20 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:57.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:57.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms 00:13:57.879 00:13:57.879 --- 10.0.0.2 ping statistics --- 00:13:57.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.879 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:13:57.879 23:10:20 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:57.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:57.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:13:57.879 00:13:57.879 --- 10.0.0.1 ping statistics --- 00:13:57.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.879 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:13:57.879 23:10:20 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:57.879 23:10:20 -- nvmf/common.sh@410 -- # return 0 00:13:57.879 23:10:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:57.879 23:10:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:57.879 23:10:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:57.879 23:10:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:57.879 23:10:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:57.879 23:10:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:57.879 23:10:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:57.879 23:10:20 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:57.879 23:10:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:57.879 23:10:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:57.879 23:10:20 -- common/autotest_common.sh@10 -- # set +x 00:13:57.879 23:10:20 -- nvmf/common.sh@469 -- # nvmfpid=2735786 00:13:57.879 23:10:20 -- nvmf/common.sh@470 -- # waitforlisten 2735786 00:13:57.879 23:10:20 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:57.879 23:10:20 -- common/autotest_common.sh@819 -- # '[' -z 2735786 ']' 00:13:57.879 23:10:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.879 23:10:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:57.879 23:10:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.879 23:10:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:57.879 23:10:20 -- common/autotest_common.sh@10 -- # set +x 00:13:58.140 [2024-06-07 23:10:20.589450] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:58.140 [2024-06-07 23:10:20.589502] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:58.140 EAL: No free 2048 kB hugepages reported on node 1 00:13:58.140 [2024-06-07 23:10:20.658101] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:58.140 [2024-06-07 23:10:20.690570] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:58.140 [2024-06-07 23:10:20.690708] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:58.140 [2024-06-07 23:10:20.690719] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:58.140 [2024-06-07 23:10:20.690728] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
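The startup sequence above is nvmfappstart: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace with -m 0xF, and the test then blocks in waitforlisten until the target answers on /var/tmp/spdk.sock (with max_retries=100, as the trace shows). A rough sketch of that start-and-wait pattern, assuming the default RPC socket and the same rpc.py client used throughout this log; the polling loop below is a hypothetical stand-in for waitforlisten, not its actual body:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll until the target answers on its RPC socket
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || exit 1    # give up if the target died
      sleep 0.5
  done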
00:13:58.140 [2024-06-07 23:10:20.690871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:58.140 [2024-06-07 23:10:20.690993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:58.140 [2024-06-07 23:10:20.691163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.140 [2024-06-07 23:10:20.691165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:58.712 23:10:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:58.712 23:10:21 -- common/autotest_common.sh@852 -- # return 0 00:13:58.712 23:10:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:58.712 23:10:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:58.712 23:10:21 -- common/autotest_common.sh@10 -- # set +x 00:13:58.973 23:10:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:58.973 23:10:21 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:58.973 23:10:21 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode12960 00:13:58.973 [2024-06-07 23:10:21.539973] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:58.973 23:10:21 -- target/invalid.sh@40 -- # out='request: 00:13:58.973 { 00:13:58.973 "nqn": "nqn.2016-06.io.spdk:cnode12960", 00:13:58.973 "tgt_name": "foobar", 00:13:58.973 "method": "nvmf_create_subsystem", 00:13:58.973 "req_id": 1 00:13:58.973 } 00:13:58.973 Got JSON-RPC error response 00:13:58.973 response: 00:13:58.973 { 00:13:58.973 "code": -32603, 00:13:58.973 "message": "Unable to find target foobar" 00:13:58.973 }' 00:13:58.973 23:10:21 -- target/invalid.sh@41 -- # [[ request: 00:13:58.973 { 00:13:58.973 "nqn": "nqn.2016-06.io.spdk:cnode12960", 00:13:58.973 "tgt_name": "foobar", 00:13:58.973 "method": "nvmf_create_subsystem", 00:13:58.973 "req_id": 1 00:13:58.973 } 00:13:58.973 Got JSON-RPC error response 00:13:58.973 response: 00:13:58.973 { 00:13:58.973 "code": -32603, 00:13:58.973 "message": "Unable to find target foobar" 00:13:58.973 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:58.973 23:10:21 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:58.973 23:10:21 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode18498 00:13:59.234 [2024-06-07 23:10:21.712584] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18498: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:59.234 23:10:21 -- target/invalid.sh@45 -- # out='request: 00:13:59.234 { 00:13:59.234 "nqn": "nqn.2016-06.io.spdk:cnode18498", 00:13:59.234 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:59.234 "method": "nvmf_create_subsystem", 00:13:59.234 "req_id": 1 00:13:59.234 } 00:13:59.234 Got JSON-RPC error response 00:13:59.234 response: 00:13:59.234 { 00:13:59.234 "code": -32602, 00:13:59.234 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:59.234 }' 00:13:59.234 23:10:21 -- target/invalid.sh@46 -- # [[ request: 00:13:59.234 { 00:13:59.234 "nqn": "nqn.2016-06.io.spdk:cnode18498", 00:13:59.234 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:59.234 "method": "nvmf_create_subsystem", 00:13:59.234 "req_id": 1 00:13:59.234 } 00:13:59.234 Got JSON-RPC error response 00:13:59.234 response: 00:13:59.234 { 
00:13:59.234 "code": -32602, 00:13:59.234 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:59.234 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:59.234 23:10:21 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:59.234 23:10:21 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode22648 00:13:59.234 [2024-06-07 23:10:21.881074] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22648: invalid model number 'SPDK_Controller' 00:13:59.234 23:10:21 -- target/invalid.sh@50 -- # out='request: 00:13:59.234 { 00:13:59.234 "nqn": "nqn.2016-06.io.spdk:cnode22648", 00:13:59.234 "model_number": "SPDK_Controller\u001f", 00:13:59.234 "method": "nvmf_create_subsystem", 00:13:59.235 "req_id": 1 00:13:59.235 } 00:13:59.235 Got JSON-RPC error response 00:13:59.235 response: 00:13:59.235 { 00:13:59.235 "code": -32602, 00:13:59.235 "message": "Invalid MN SPDK_Controller\u001f" 00:13:59.235 }' 00:13:59.235 23:10:21 -- target/invalid.sh@51 -- # [[ request: 00:13:59.235 { 00:13:59.235 "nqn": "nqn.2016-06.io.spdk:cnode22648", 00:13:59.235 "model_number": "SPDK_Controller\u001f", 00:13:59.235 "method": "nvmf_create_subsystem", 00:13:59.235 "req_id": 1 00:13:59.235 } 00:13:59.235 Got JSON-RPC error response 00:13:59.235 response: 00:13:59.235 { 00:13:59.235 "code": -32602, 00:13:59.235 "message": "Invalid MN SPDK_Controller\u001f" 00:13:59.235 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:59.235 23:10:21 -- target/invalid.sh@54 -- # gen_random_s 21 00:13:59.235 23:10:21 -- target/invalid.sh@19 -- # local length=21 ll 00:13:59.495 23:10:21 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:59.495 23:10:21 -- target/invalid.sh@21 -- # local chars 00:13:59.495 23:10:21 -- target/invalid.sh@22 -- # local string 00:13:59.495 23:10:21 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:59.495 23:10:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.495 23:10:21 -- target/invalid.sh@25 -- # printf %x 83 00:13:59.495 23:10:21 -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:59.495 23:10:21 -- target/invalid.sh@25 -- # string+=S 00:13:59.495 23:10:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.495 23:10:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.495 23:10:21 -- target/invalid.sh@25 -- # printf %x 48 00:13:59.495 23:10:21 -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:59.495 23:10:21 -- target/invalid.sh@25 -- # string+=0 00:13:59.495 23:10:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.495 23:10:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.495 23:10:21 -- target/invalid.sh@25 -- # printf %x 78 00:13:59.495 23:10:21 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:59.495 23:10:21 -- target/invalid.sh@25 -- # string+=N 00:13:59.495 23:10:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.495 23:10:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.495 23:10:21 -- target/invalid.sh@25 -- # printf %x 51 00:13:59.495 23:10:21 -- 
target/invalid.sh@25 -- # echo -e '\x33' 00:13:59.495 23:10:21 -- target/invalid.sh@25 -- # string+=3 00:13:59.495 23:10:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.495 23:10:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.495 23:10:21 -- target/invalid.sh@25 -- # printf %x 104 00:13:59.495 23:10:21 -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:59.495 23:10:21 -- target/invalid.sh@25 -- # string+=h 00:13:59.495 23:10:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.495 23:10:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.495 23:10:21 -- target/invalid.sh@25 -- # printf %x 79 00:13:59.495 23:10:21 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:59.495 23:10:21 -- target/invalid.sh@25 -- # string+=O 00:13:59.496 23:10:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.496 23:10:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.496 23:10:21 -- target/invalid.sh@25 -- # printf %x 104 00:13:59.496 23:10:21 -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:59.496 23:10:21 -- target/invalid.sh@25 -- # string+=h 00:13:59.496 23:10:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.496 23:10:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.496 23:10:21 -- target/invalid.sh@25 -- # printf %x 117 00:13:59.496 23:10:21 -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:59.496 23:10:21 -- target/invalid.sh@25 -- # string+=u 00:13:59.496 23:10:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.496 23:10:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.496 23:10:21 -- target/invalid.sh@25 -- # printf %x 101 00:13:59.496 23:10:21 -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:59.496 23:10:21 -- target/invalid.sh@25 -- # string+=e 00:13:59.496 23:10:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.496 23:10:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.496 23:10:21 -- target/invalid.sh@25 -- # printf %x 94 00:13:59.496 23:10:21 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:59.496 23:10:21 -- target/invalid.sh@25 -- # string+='^' 00:13:59.496 23:10:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.496 23:10:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.496 23:10:21 -- target/invalid.sh@25 -- # printf %x 113 00:13:59.496 23:10:21 -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:59.496 23:10:21 -- target/invalid.sh@25 -- # string+=q 00:13:59.496 23:10:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.496 23:10:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.496 23:10:21 -- target/invalid.sh@25 -- # printf %x 88 00:13:59.496 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:59.496 23:10:22 -- target/invalid.sh@25 -- # string+=X 00:13:59.496 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.496 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.496 23:10:22 -- target/invalid.sh@25 -- # printf %x 100 00:13:59.496 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:59.496 23:10:22 -- target/invalid.sh@25 -- # string+=d 00:13:59.496 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.496 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.496 23:10:22 -- target/invalid.sh@25 -- # printf %x 111 00:13:59.496 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:59.496 23:10:22 -- target/invalid.sh@25 -- # string+=o 00:13:59.496 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.496 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.496 23:10:22 -- target/invalid.sh@25 -- # printf %x 127 00:13:59.496 23:10:22 -- 
target/invalid.sh@25 -- # echo -e '\x7f' 00:13:59.496 23:10:22 -- target/invalid.sh@25 -- # string+=$'\177' 00:13:59.496 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.496 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.496 23:10:22 -- target/invalid.sh@25 -- # printf %x 108 00:13:59.496 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:59.496 23:10:22 -- target/invalid.sh@25 -- # string+=l 00:13:59.496 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.496 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.496 23:10:22 -- target/invalid.sh@25 -- # printf %x 85 00:13:59.496 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:59.496 23:10:22 -- target/invalid.sh@25 -- # string+=U 00:13:59.496 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.496 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.496 23:10:22 -- target/invalid.sh@25 -- # printf %x 53 00:13:59.496 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:59.496 23:10:22 -- target/invalid.sh@25 -- # string+=5 00:13:59.496 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.496 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.496 23:10:22 -- target/invalid.sh@25 -- # printf %x 81 00:13:59.496 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:59.496 23:10:22 -- target/invalid.sh@25 -- # string+=Q 00:13:59.496 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.496 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.496 23:10:22 -- target/invalid.sh@25 -- # printf %x 98 00:13:59.496 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:59.496 23:10:22 -- target/invalid.sh@25 -- # string+=b 00:13:59.496 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.496 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.496 23:10:22 -- target/invalid.sh@25 -- # printf %x 110 00:13:59.496 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:59.496 23:10:22 -- target/invalid.sh@25 -- # string+=n 00:13:59.496 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.496 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.496 23:10:22 -- target/invalid.sh@28 -- # [[ S == \- ]] 00:13:59.496 23:10:22 -- target/invalid.sh@31 -- # echo 'S0N3hOhue^qXdolU5Qbn' 00:13:59.496 23:10:22 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'S0N3hOhue^qXdolU5Qbn' nqn.2016-06.io.spdk:cnode7712 00:13:59.757 [2024-06-07 23:10:22.206155] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7712: invalid serial number 'S0N3hOhue^qXdolU5Qbn' 00:13:59.757 23:10:22 -- target/invalid.sh@54 -- # out='request: 00:13:59.757 { 00:13:59.757 "nqn": "nqn.2016-06.io.spdk:cnode7712", 00:13:59.757 "serial_number": "S0N3hOhue^qXdo\u007flU5Qbn", 00:13:59.757 "method": "nvmf_create_subsystem", 00:13:59.757 "req_id": 1 00:13:59.757 } 00:13:59.757 Got JSON-RPC error response 00:13:59.757 response: 00:13:59.757 { 00:13:59.757 "code": -32602, 00:13:59.757 "message": "Invalid SN S0N3hOhue^qXdo\u007flU5Qbn" 00:13:59.757 }' 00:13:59.757 23:10:22 -- target/invalid.sh@55 -- # [[ request: 00:13:59.757 { 00:13:59.757 "nqn": "nqn.2016-06.io.spdk:cnode7712", 00:13:59.757 "serial_number": "S0N3hOhue^qXdo\u007flU5Qbn", 00:13:59.757 "method": "nvmf_create_subsystem", 00:13:59.757 "req_id": 1 00:13:59.757 } 00:13:59.757 Got JSON-RPC error response 00:13:59.757 response: 00:13:59.757 { 00:13:59.757 "code": -32602, 
00:13:59.757 "message": "Invalid SN S0N3hOhue^qXdo\u007flU5Qbn" 00:13:59.757 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:59.757 23:10:22 -- target/invalid.sh@58 -- # gen_random_s 41 00:13:59.757 23:10:22 -- target/invalid.sh@19 -- # local length=41 ll 00:13:59.757 23:10:22 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:59.757 23:10:22 -- target/invalid.sh@21 -- # local chars 00:13:59.757 23:10:22 -- target/invalid.sh@22 -- # local string 00:13:59.757 23:10:22 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:59.757 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.757 23:10:22 -- target/invalid.sh@25 -- # printf %x 105 00:13:59.757 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:59.757 23:10:22 -- target/invalid.sh@25 -- # string+=i 00:13:59.757 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.757 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.757 23:10:22 -- target/invalid.sh@25 -- # printf %x 45 00:13:59.757 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:59.757 23:10:22 -- target/invalid.sh@25 -- # string+=- 00:13:59.757 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.757 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.757 23:10:22 -- target/invalid.sh@25 -- # printf %x 93 00:13:59.757 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:59.757 23:10:22 -- target/invalid.sh@25 -- # string+=']' 00:13:59.757 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.757 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.757 23:10:22 -- target/invalid.sh@25 -- # printf %x 104 00:13:59.757 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:59.757 23:10:22 -- target/invalid.sh@25 -- # string+=h 00:13:59.757 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.757 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.757 23:10:22 -- target/invalid.sh@25 -- # printf %x 62 00:13:59.757 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:59.757 23:10:22 -- target/invalid.sh@25 -- # string+='>' 00:13:59.757 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.757 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.757 23:10:22 -- target/invalid.sh@25 -- # printf %x 95 00:13:59.757 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:59.757 23:10:22 -- target/invalid.sh@25 -- # string+=_ 00:13:59.757 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.757 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.757 23:10:22 -- target/invalid.sh@25 -- # printf %x 100 00:13:59.757 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:59.757 23:10:22 -- target/invalid.sh@25 -- # string+=d 00:13:59.757 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # printf %x 90 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # string+=Z 00:13:59.758 23:10:22 -- 
target/invalid.sh@24 -- # (( ll++ )) 00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # printf %x 90 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # string+=Z 00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # printf %x 112 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # string+=p 00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # printf %x 99 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # string+=c 00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # printf %x 85 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # string+=U 00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # printf %x 77 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # string+=M 00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # printf %x 93 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # string+=']' 00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # printf %x 33 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # string+='!' 
00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # printf %x 66 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # string+=B 00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # printf %x 111 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # string+=o 00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # printf %x 86 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # string+=V 00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # printf %x 52 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # string+=4 00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # printf %x 101 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # string+=e 00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # printf %x 87 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # string+=W 00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # printf %x 82 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # string+=R 00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # printf %x 89 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # string+=Y 00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # printf %x 77 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # string+=M 00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # printf %x 124 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # string+='|' 00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # printf %x 110 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # string+=n 
00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:59.758 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:59.758 23:10:22 -- target/invalid.sh@25 -- # printf %x 70 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x46' 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # string+=F 00:14:00.019 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.019 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # printf %x 95 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # string+=_ 00:14:00.019 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.019 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # printf %x 39 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # string+=\' 00:14:00.019 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.019 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # printf %x 112 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x70' 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # string+=p 00:14:00.019 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.019 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # printf %x 53 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x35' 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # string+=5 00:14:00.019 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.019 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # printf %x 89 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x59' 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # string+=Y 00:14:00.019 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.019 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # printf %x 72 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x48' 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # string+=H 00:14:00.019 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.019 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # printf %x 78 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # string+=N 00:14:00.019 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.019 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # printf %x 126 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # string+='~' 00:14:00.019 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.019 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # printf %x 41 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x29' 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # string+=')' 00:14:00.019 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.019 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # printf %x 53 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x35' 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # string+=5 
00:14:00.019 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.019 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # printf %x 89 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x59' 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # string+=Y 00:14:00.019 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.019 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # printf %x 72 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x48' 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # string+=H 00:14:00.019 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.019 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # printf %x 89 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x59' 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # string+=Y 00:14:00.019 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.019 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # printf %x 104 00:14:00.019 23:10:22 -- target/invalid.sh@25 -- # echo -e '\x68' 00:14:00.020 23:10:22 -- target/invalid.sh@25 -- # string+=h 00:14:00.020 23:10:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.020 23:10:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.020 23:10:22 -- target/invalid.sh@28 -- # [[ i == \- ]] 00:14:00.020 23:10:22 -- target/invalid.sh@31 -- # echo 'i-]h>_dZZpcUM]!BoV4eWRYM|nF_'\''p5YHN~)5YHYh' 00:14:00.020 23:10:22 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'i-]h>_dZZpcUM]!BoV4eWRYM|nF_'\''p5YHN~)5YHYh' nqn.2016-06.io.spdk:cnode21196 00:14:00.020 [2024-06-07 23:10:22.679699] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21196: invalid model number 'i-]h>_dZZpcUM]!BoV4eWRYM|nF_'p5YHN~)5YHYh' 00:14:00.281 23:10:22 -- target/invalid.sh@58 -- # out='request: 00:14:00.281 { 00:14:00.281 "nqn": "nqn.2016-06.io.spdk:cnode21196", 00:14:00.281 "model_number": "i-]h>_dZZpcUM]!BoV4eWRYM|nF_'\''p5YHN~)5YHYh", 00:14:00.281 "method": "nvmf_create_subsystem", 00:14:00.281 "req_id": 1 00:14:00.281 } 00:14:00.281 Got JSON-RPC error response 00:14:00.281 response: 00:14:00.281 { 00:14:00.281 "code": -32602, 00:14:00.281 "message": "Invalid MN i-]h>_dZZpcUM]!BoV4eWRYM|nF_'\''p5YHN~)5YHYh" 00:14:00.281 }' 00:14:00.281 23:10:22 -- target/invalid.sh@59 -- # [[ request: 00:14:00.281 { 00:14:00.281 "nqn": "nqn.2016-06.io.spdk:cnode21196", 00:14:00.281 "model_number": "i-]h>_dZZpcUM]!BoV4eWRYM|nF_'p5YHN~)5YHYh", 00:14:00.281 "method": "nvmf_create_subsystem", 00:14:00.281 "req_id": 1 00:14:00.281 } 00:14:00.281 Got JSON-RPC error response 00:14:00.281 response: 00:14:00.281 { 00:14:00.281 "code": -32602, 00:14:00.281 "message": "Invalid MN i-]h>_dZZpcUM]!BoV4eWRYM|nF_'p5YHN~)5YHYh" 00:14:00.281 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:00.281 23:10:22 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:00.281 [2024-06-07 23:10:22.844323] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:00.281 23:10:22 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:00.543 23:10:23 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 
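The loop traced above is invalid.sh assembling a random model number one byte at a time: printf %x picks a code point, echo -e '\xNN' turns it into a character, and string+= appends it; the finished string is then handed to nvmf_create_subsystem -d, which must be rejected with an "Invalid MN" JSON-RPC error for the test to pass. A minimal bash sketch of that pattern (the character range, loop length, and cnode number below are illustrative, not the script's exact values):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    string=
    for ((ll = 0; ll < 41; ll++)); do
        # pick a printable ASCII code point (0x21-0x7e) and append the character it names
        code=$(printf %x $((RANDOM % 94 + 33)))
        string+=$(echo -e "\x$code")
    done
    out=$($rpc nvmf_create_subsystem -d "$string" nqn.2016-06.io.spdk:cnode21196 2>&1) || true
    # the negative test passes only if the target rejects the model number
    [[ $out == *"Invalid MN"* ]]

In the run above the target logged "invalid model number", returned code -32602, and the *\I\n\v\a\l\i\d\ \M\N* glob match on the captured output is what turns that rejection into a test pass.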
00:14:00.543 23:10:23 -- target/invalid.sh@67 -- # echo '' 00:14:00.543 23:10:23 -- target/invalid.sh@67 -- # head -n 1 00:14:00.543 23:10:23 -- target/invalid.sh@67 -- # IP= 00:14:00.543 23:10:23 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:14:00.543 [2024-06-07 23:10:23.186918] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:00.543 23:10:23 -- target/invalid.sh@69 -- # out='request: 00:14:00.543 { 00:14:00.543 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:00.543 "listen_address": { 00:14:00.543 "trtype": "tcp", 00:14:00.543 "traddr": "", 00:14:00.543 "trsvcid": "4421" 00:14:00.543 }, 00:14:00.543 "method": "nvmf_subsystem_remove_listener", 00:14:00.543 "req_id": 1 00:14:00.543 } 00:14:00.543 Got JSON-RPC error response 00:14:00.543 response: 00:14:00.543 { 00:14:00.543 "code": -32602, 00:14:00.543 "message": "Invalid parameters" 00:14:00.543 }' 00:14:00.543 23:10:23 -- target/invalid.sh@70 -- # [[ request: 00:14:00.543 { 00:14:00.543 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:00.543 "listen_address": { 00:14:00.543 "trtype": "tcp", 00:14:00.543 "traddr": "", 00:14:00.543 "trsvcid": "4421" 00:14:00.543 }, 00:14:00.543 "method": "nvmf_subsystem_remove_listener", 00:14:00.543 "req_id": 1 00:14:00.543 } 00:14:00.543 Got JSON-RPC error response 00:14:00.543 response: 00:14:00.543 { 00:14:00.543 "code": -32602, 00:14:00.543 "message": "Invalid parameters" 00:14:00.543 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:00.543 23:10:23 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27063 -i 0 00:14:00.804 [2024-06-07 23:10:23.355450] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27063: invalid cntlid range [0-65519] 00:14:00.804 23:10:23 -- target/invalid.sh@73 -- # out='request: 00:14:00.804 { 00:14:00.804 "nqn": "nqn.2016-06.io.spdk:cnode27063", 00:14:00.804 "min_cntlid": 0, 00:14:00.804 "method": "nvmf_create_subsystem", 00:14:00.804 "req_id": 1 00:14:00.804 } 00:14:00.804 Got JSON-RPC error response 00:14:00.804 response: 00:14:00.804 { 00:14:00.804 "code": -32602, 00:14:00.804 "message": "Invalid cntlid range [0-65519]" 00:14:00.804 }' 00:14:00.804 23:10:23 -- target/invalid.sh@74 -- # [[ request: 00:14:00.804 { 00:14:00.804 "nqn": "nqn.2016-06.io.spdk:cnode27063", 00:14:00.804 "min_cntlid": 0, 00:14:00.804 "method": "nvmf_create_subsystem", 00:14:00.804 "req_id": 1 00:14:00.804 } 00:14:00.804 Got JSON-RPC error response 00:14:00.804 response: 00:14:00.804 { 00:14:00.804 "code": -32602, 00:14:00.804 "message": "Invalid cntlid range [0-65519]" 00:14:00.804 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:00.804 23:10:23 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2706 -i 65520 00:14:01.064 [2024-06-07 23:10:23.523989] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2706: invalid cntlid range [65520-65519] 00:14:01.064 23:10:23 -- target/invalid.sh@75 -- # out='request: 00:14:01.064 { 00:14:01.064 "nqn": "nqn.2016-06.io.spdk:cnode2706", 00:14:01.065 "min_cntlid": 65520, 00:14:01.065 "method": "nvmf_create_subsystem", 00:14:01.065 "req_id": 1 00:14:01.065 } 00:14:01.065 Got JSON-RPC error response 00:14:01.065 response: 00:14:01.065 { 
00:14:01.065 "code": -32602, 00:14:01.065 "message": "Invalid cntlid range [65520-65519]" 00:14:01.065 }' 00:14:01.065 23:10:23 -- target/invalid.sh@76 -- # [[ request: 00:14:01.065 { 00:14:01.065 "nqn": "nqn.2016-06.io.spdk:cnode2706", 00:14:01.065 "min_cntlid": 65520, 00:14:01.065 "method": "nvmf_create_subsystem", 00:14:01.065 "req_id": 1 00:14:01.065 } 00:14:01.065 Got JSON-RPC error response 00:14:01.065 response: 00:14:01.065 { 00:14:01.065 "code": -32602, 00:14:01.065 "message": "Invalid cntlid range [65520-65519]" 00:14:01.065 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:01.065 23:10:23 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27179 -I 0 00:14:01.065 [2024-06-07 23:10:23.684495] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27179: invalid cntlid range [1-0] 00:14:01.065 23:10:23 -- target/invalid.sh@77 -- # out='request: 00:14:01.065 { 00:14:01.065 "nqn": "nqn.2016-06.io.spdk:cnode27179", 00:14:01.065 "max_cntlid": 0, 00:14:01.065 "method": "nvmf_create_subsystem", 00:14:01.065 "req_id": 1 00:14:01.065 } 00:14:01.065 Got JSON-RPC error response 00:14:01.065 response: 00:14:01.065 { 00:14:01.065 "code": -32602, 00:14:01.065 "message": "Invalid cntlid range [1-0]" 00:14:01.065 }' 00:14:01.065 23:10:23 -- target/invalid.sh@78 -- # [[ request: 00:14:01.065 { 00:14:01.065 "nqn": "nqn.2016-06.io.spdk:cnode27179", 00:14:01.065 "max_cntlid": 0, 00:14:01.065 "method": "nvmf_create_subsystem", 00:14:01.065 "req_id": 1 00:14:01.065 } 00:14:01.065 Got JSON-RPC error response 00:14:01.065 response: 00:14:01.065 { 00:14:01.065 "code": -32602, 00:14:01.065 "message": "Invalid cntlid range [1-0]" 00:14:01.065 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:01.065 23:10:23 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15300 -I 65520 00:14:01.411 [2024-06-07 23:10:23.845064] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15300: invalid cntlid range [1-65520] 00:14:01.411 23:10:23 -- target/invalid.sh@79 -- # out='request: 00:14:01.411 { 00:14:01.411 "nqn": "nqn.2016-06.io.spdk:cnode15300", 00:14:01.411 "max_cntlid": 65520, 00:14:01.411 "method": "nvmf_create_subsystem", 00:14:01.411 "req_id": 1 00:14:01.411 } 00:14:01.411 Got JSON-RPC error response 00:14:01.411 response: 00:14:01.411 { 00:14:01.411 "code": -32602, 00:14:01.411 "message": "Invalid cntlid range [1-65520]" 00:14:01.411 }' 00:14:01.411 23:10:23 -- target/invalid.sh@80 -- # [[ request: 00:14:01.411 { 00:14:01.411 "nqn": "nqn.2016-06.io.spdk:cnode15300", 00:14:01.411 "max_cntlid": 65520, 00:14:01.411 "method": "nvmf_create_subsystem", 00:14:01.411 "req_id": 1 00:14:01.411 } 00:14:01.411 Got JSON-RPC error response 00:14:01.411 response: 00:14:01.411 { 00:14:01.411 "code": -32602, 00:14:01.411 "message": "Invalid cntlid range [1-65520]" 00:14:01.411 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:01.411 23:10:23 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18775 -i 6 -I 5 00:14:01.411 [2024-06-07 23:10:24.017636] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18775: invalid cntlid range [6-5] 00:14:01.411 23:10:24 -- target/invalid.sh@83 -- # out='request: 00:14:01.411 { 00:14:01.411 "nqn": 
"nqn.2016-06.io.spdk:cnode18775", 00:14:01.411 "min_cntlid": 6, 00:14:01.411 "max_cntlid": 5, 00:14:01.411 "method": "nvmf_create_subsystem", 00:14:01.411 "req_id": 1 00:14:01.411 } 00:14:01.411 Got JSON-RPC error response 00:14:01.411 response: 00:14:01.411 { 00:14:01.411 "code": -32602, 00:14:01.411 "message": "Invalid cntlid range [6-5]" 00:14:01.412 }' 00:14:01.412 23:10:24 -- target/invalid.sh@84 -- # [[ request: 00:14:01.412 { 00:14:01.412 "nqn": "nqn.2016-06.io.spdk:cnode18775", 00:14:01.412 "min_cntlid": 6, 00:14:01.412 "max_cntlid": 5, 00:14:01.412 "method": "nvmf_create_subsystem", 00:14:01.412 "req_id": 1 00:14:01.412 } 00:14:01.412 Got JSON-RPC error response 00:14:01.412 response: 00:14:01.412 { 00:14:01.412 "code": -32602, 00:14:01.412 "message": "Invalid cntlid range [6-5]" 00:14:01.412 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:01.412 23:10:24 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:01.672 23:10:24 -- target/invalid.sh@87 -- # out='request: 00:14:01.672 { 00:14:01.672 "name": "foobar", 00:14:01.672 "method": "nvmf_delete_target", 00:14:01.672 "req_id": 1 00:14:01.672 } 00:14:01.672 Got JSON-RPC error response 00:14:01.672 response: 00:14:01.672 { 00:14:01.672 "code": -32602, 00:14:01.672 "message": "The specified target doesn'\''t exist, cannot delete it." 00:14:01.672 }' 00:14:01.672 23:10:24 -- target/invalid.sh@88 -- # [[ request: 00:14:01.672 { 00:14:01.672 "name": "foobar", 00:14:01.672 "method": "nvmf_delete_target", 00:14:01.672 "req_id": 1 00:14:01.672 } 00:14:01.672 Got JSON-RPC error response 00:14:01.672 response: 00:14:01.672 { 00:14:01.672 "code": -32602, 00:14:01.672 "message": "The specified target doesn't exist, cannot delete it." 
00:14:01.672 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:01.672 23:10:24 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:01.672 23:10:24 -- target/invalid.sh@91 -- # nvmftestfini 00:14:01.672 23:10:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:01.672 23:10:24 -- nvmf/common.sh@116 -- # sync 00:14:01.672 23:10:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:01.672 23:10:24 -- nvmf/common.sh@119 -- # set +e 00:14:01.672 23:10:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:01.672 23:10:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:01.672 rmmod nvme_tcp 00:14:01.672 rmmod nvme_fabrics 00:14:01.672 rmmod nvme_keyring 00:14:01.672 23:10:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:01.672 23:10:24 -- nvmf/common.sh@123 -- # set -e 00:14:01.672 23:10:24 -- nvmf/common.sh@124 -- # return 0 00:14:01.672 23:10:24 -- nvmf/common.sh@477 -- # '[' -n 2735786 ']' 00:14:01.672 23:10:24 -- nvmf/common.sh@478 -- # killprocess 2735786 00:14:01.672 23:10:24 -- common/autotest_common.sh@926 -- # '[' -z 2735786 ']' 00:14:01.672 23:10:24 -- common/autotest_common.sh@930 -- # kill -0 2735786 00:14:01.672 23:10:24 -- common/autotest_common.sh@931 -- # uname 00:14:01.672 23:10:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:01.672 23:10:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2735786 00:14:01.672 23:10:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:01.672 23:10:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:01.672 23:10:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2735786' 00:14:01.672 killing process with pid 2735786 00:14:01.672 23:10:24 -- common/autotest_common.sh@945 -- # kill 2735786 00:14:01.672 23:10:24 -- common/autotest_common.sh@950 -- # wait 2735786 00:14:01.933 23:10:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:01.933 23:10:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:01.933 23:10:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:01.933 23:10:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:01.933 23:10:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:01.933 23:10:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.933 23:10:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:01.933 23:10:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.848 23:10:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:03.848 00:14:03.848 real 0m13.293s 00:14:03.848 user 0m18.940s 00:14:03.848 sys 0m6.242s 00:14:03.848 23:10:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:03.848 23:10:26 -- common/autotest_common.sh@10 -- # set +x 00:14:03.848 ************************************ 00:14:03.848 END TEST nvmf_invalid 00:14:03.848 ************************************ 00:14:03.848 23:10:26 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:03.848 23:10:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:03.848 23:10:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:03.848 23:10:26 -- common/autotest_common.sh@10 -- # set +x 00:14:03.848 ************************************ 00:14:03.848 START TEST nvmf_abort 00:14:03.848 ************************************ 00:14:03.848 23:10:26 -- 
common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:04.110 * Looking for test storage... 00:14:04.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:04.110 23:10:26 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:04.110 23:10:26 -- nvmf/common.sh@7 -- # uname -s 00:14:04.110 23:10:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:04.110 23:10:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:04.110 23:10:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:04.110 23:10:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:04.110 23:10:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:04.110 23:10:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:04.110 23:10:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:04.110 23:10:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:04.110 23:10:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:04.110 23:10:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:04.110 23:10:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:04.110 23:10:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:04.110 23:10:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:04.110 23:10:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:04.110 23:10:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:04.110 23:10:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:04.110 23:10:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:04.110 23:10:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:04.110 23:10:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:04.110 23:10:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.110 23:10:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.110 23:10:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.110 23:10:26 -- paths/export.sh@5 -- # export PATH 00:14:04.110 23:10:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.110 23:10:26 -- nvmf/common.sh@46 -- # : 0 00:14:04.110 23:10:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:04.110 23:10:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:04.110 23:10:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:04.110 23:10:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:04.110 23:10:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:04.110 23:10:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:04.110 23:10:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:04.110 23:10:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:04.110 23:10:26 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:04.110 23:10:26 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:14:04.110 23:10:26 -- target/abort.sh@14 -- # nvmftestinit 00:14:04.110 23:10:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:04.110 23:10:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:04.110 23:10:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:04.110 23:10:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:04.110 23:10:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:04.110 23:10:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.110 23:10:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:04.110 23:10:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.110 23:10:26 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:04.110 23:10:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:04.110 23:10:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:04.110 23:10:26 -- common/autotest_common.sh@10 -- # set +x 00:14:12.255 23:10:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:12.255 23:10:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:12.255 23:10:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:12.255 23:10:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:12.255 23:10:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:12.256 23:10:33 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:12.256 23:10:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:12.256 23:10:33 -- nvmf/common.sh@294 -- # net_devs=() 00:14:12.256 23:10:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:12.256 23:10:33 -- nvmf/common.sh@295 -- 
# e810=() 00:14:12.256 23:10:33 -- nvmf/common.sh@295 -- # local -ga e810 00:14:12.256 23:10:33 -- nvmf/common.sh@296 -- # x722=() 00:14:12.256 23:10:33 -- nvmf/common.sh@296 -- # local -ga x722 00:14:12.256 23:10:33 -- nvmf/common.sh@297 -- # mlx=() 00:14:12.256 23:10:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:12.256 23:10:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:12.256 23:10:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:12.256 23:10:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:12.256 23:10:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:12.256 23:10:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:12.256 23:10:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:12.256 23:10:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:12.256 23:10:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:12.256 23:10:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:12.256 23:10:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:12.256 23:10:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:12.256 23:10:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:12.256 23:10:33 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:12.256 23:10:33 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:12.256 23:10:33 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:12.256 23:10:33 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:12.256 23:10:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:12.256 23:10:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:12.256 23:10:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:12.256 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:12.256 23:10:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:12.256 23:10:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:12.256 23:10:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:12.256 23:10:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:12.256 23:10:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:12.256 23:10:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:12.256 23:10:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:12.256 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:12.256 23:10:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:12.256 23:10:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:12.256 23:10:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:12.256 23:10:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:12.256 23:10:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:12.256 23:10:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:12.256 23:10:33 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:12.256 23:10:33 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:12.256 23:10:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:12.256 23:10:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:12.256 23:10:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:12.256 23:10:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:12.256 23:10:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:12.256 Found 
net devices under 0000:31:00.0: cvl_0_0 00:14:12.256 23:10:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:12.256 23:10:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:12.256 23:10:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:12.256 23:10:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:12.256 23:10:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:12.256 23:10:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:12.256 Found net devices under 0000:31:00.1: cvl_0_1 00:14:12.256 23:10:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:12.256 23:10:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:12.256 23:10:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:12.256 23:10:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:12.256 23:10:33 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:12.256 23:10:33 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:12.256 23:10:33 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:12.256 23:10:33 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:12.256 23:10:33 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:12.256 23:10:33 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:12.256 23:10:33 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:12.256 23:10:33 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:12.256 23:10:33 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:12.256 23:10:33 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:12.256 23:10:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:12.256 23:10:33 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:12.256 23:10:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:12.256 23:10:33 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:12.256 23:10:33 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:12.256 23:10:33 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:12.256 23:10:33 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:12.256 23:10:33 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:12.256 23:10:33 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:12.256 23:10:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:12.256 23:10:33 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:12.256 23:10:33 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:12.256 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:12.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:14:12.256 00:14:12.256 --- 10.0.0.2 ping statistics --- 00:14:12.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.256 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:14:12.256 23:10:33 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:12.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:12.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.357 ms 00:14:12.256 00:14:12.256 --- 10.0.0.1 ping statistics --- 00:14:12.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.256 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:14:12.256 23:10:33 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:12.256 23:10:33 -- nvmf/common.sh@410 -- # return 0 00:14:12.256 23:10:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:12.256 23:10:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:12.256 23:10:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:12.256 23:10:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:12.256 23:10:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:12.256 23:10:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:12.256 23:10:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:12.256 23:10:33 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:14:12.256 23:10:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:12.256 23:10:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:12.256 23:10:33 -- common/autotest_common.sh@10 -- # set +x 00:14:12.256 23:10:33 -- nvmf/common.sh@469 -- # nvmfpid=2741047 00:14:12.256 23:10:33 -- nvmf/common.sh@470 -- # waitforlisten 2741047 00:14:12.256 23:10:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:12.256 23:10:33 -- common/autotest_common.sh@819 -- # '[' -z 2741047 ']' 00:14:12.256 23:10:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.256 23:10:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:12.256 23:10:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.256 23:10:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:12.256 23:10:33 -- common/autotest_common.sh@10 -- # set +x 00:14:12.256 [2024-06-07 23:10:34.021903] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:14:12.256 [2024-06-07 23:10:34.021965] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.256 EAL: No free 2048 kB hugepages reported on node 1 00:14:12.256 [2024-06-07 23:10:34.111076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:12.256 [2024-06-07 23:10:34.156551] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:12.256 [2024-06-07 23:10:34.156709] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:12.256 [2024-06-07 23:10:34.156721] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:12.256 [2024-06-07 23:10:34.156730] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
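The ping exchange above is the tail end of nvmf_tcp_init, which gives the test a real TCP path on a single host by pushing one E810 port into its own network namespace (the target side, 10.0.0.2) and leaving the peer port in the root namespace (the initiator side, 10.0.0.1). Condensed from the commands traced earlier in this run (interface names cvl_0_0/cvl_0_1 are specific to this rig), roughly:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target-facing port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP in
    ping -c 1 10.0.0.2                                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator

Every later nvmf_tgt invocation is wrapped in "ip netns exec cvl_0_0_ns_spdk", so the target listens on 10.0.0.2:4420 inside the namespace while the initiator-side tools connect from the root namespace.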
00:14:12.256 [2024-06-07 23:10:34.156871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:12.256 [2024-06-07 23:10:34.157032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.256 [2024-06-07 23:10:34.157033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:12.256 23:10:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:12.256 23:10:34 -- common/autotest_common.sh@852 -- # return 0 00:14:12.256 23:10:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:12.256 23:10:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:12.257 23:10:34 -- common/autotest_common.sh@10 -- # set +x 00:14:12.257 23:10:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:12.257 23:10:34 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:14:12.257 23:10:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:12.257 23:10:34 -- common/autotest_common.sh@10 -- # set +x 00:14:12.257 [2024-06-07 23:10:34.844789] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:12.257 23:10:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:12.257 23:10:34 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:14:12.257 23:10:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:12.257 23:10:34 -- common/autotest_common.sh@10 -- # set +x 00:14:12.257 Malloc0 00:14:12.257 23:10:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:12.257 23:10:34 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:12.257 23:10:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:12.257 23:10:34 -- common/autotest_common.sh@10 -- # set +x 00:14:12.257 Delay0 00:14:12.257 23:10:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:12.257 23:10:34 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:12.257 23:10:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:12.257 23:10:34 -- common/autotest_common.sh@10 -- # set +x 00:14:12.257 23:10:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:12.257 23:10:34 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:14:12.257 23:10:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:12.257 23:10:34 -- common/autotest_common.sh@10 -- # set +x 00:14:12.257 23:10:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:12.257 23:10:34 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:12.257 23:10:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:12.257 23:10:34 -- common/autotest_common.sh@10 -- # set +x 00:14:12.517 [2024-06-07 23:10:34.940324] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:12.517 23:10:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:12.517 23:10:34 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:12.517 23:10:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:12.517 23:10:34 -- common/autotest_common.sh@10 -- # set +x 00:14:12.517 23:10:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:12.517 23:10:34 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:14:12.517 EAL: No free 2048 kB hugepages reported on node 1 00:14:12.517 [2024-06-07 23:10:35.092442] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:15.062 Initializing NVMe Controllers 00:14:15.062 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:15.062 controller IO queue size 128 less than required 00:14:15.062 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:14:15.062 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:14:15.062 Initialization complete. Launching workers. 00:14:15.062 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33034 00:14:15.062 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33095, failed to submit 62 00:14:15.062 success 33034, unsuccess 61, failed 0 00:14:15.062 23:10:37 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:15.062 23:10:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:15.062 23:10:37 -- common/autotest_common.sh@10 -- # set +x 00:14:15.062 23:10:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:15.062 23:10:37 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:14:15.062 23:10:37 -- target/abort.sh@38 -- # nvmftestfini 00:14:15.062 23:10:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:15.062 23:10:37 -- nvmf/common.sh@116 -- # sync 00:14:15.062 23:10:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:15.062 23:10:37 -- nvmf/common.sh@119 -- # set +e 00:14:15.062 23:10:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:15.062 23:10:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:15.062 rmmod nvme_tcp 00:14:15.062 rmmod nvme_fabrics 00:14:15.062 rmmod nvme_keyring 00:14:15.062 23:10:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:15.062 23:10:37 -- nvmf/common.sh@123 -- # set -e 00:14:15.062 23:10:37 -- nvmf/common.sh@124 -- # return 0 00:14:15.062 23:10:37 -- nvmf/common.sh@477 -- # '[' -n 2741047 ']' 00:14:15.062 23:10:37 -- nvmf/common.sh@478 -- # killprocess 2741047 00:14:15.062 23:10:37 -- common/autotest_common.sh@926 -- # '[' -z 2741047 ']' 00:14:15.062 23:10:37 -- common/autotest_common.sh@930 -- # kill -0 2741047 00:14:15.062 23:10:37 -- common/autotest_common.sh@931 -- # uname 00:14:15.062 23:10:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:15.062 23:10:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2741047 00:14:15.062 23:10:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:15.062 23:10:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:15.062 23:10:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2741047' 00:14:15.062 killing process with pid 2741047 00:14:15.062 23:10:37 -- common/autotest_common.sh@945 -- # kill 2741047 00:14:15.062 23:10:37 -- common/autotest_common.sh@950 -- # wait 2741047 00:14:15.062 23:10:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:15.062 23:10:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:15.062 23:10:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:15.062 23:10:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:15.062 23:10:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:15.062 
23:10:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.062 23:10:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.062 23:10:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.976 23:10:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:16.976 00:14:16.976 real 0m13.012s 00:14:16.976 user 0m13.735s 00:14:16.976 sys 0m6.285s 00:14:16.976 23:10:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:16.976 23:10:39 -- common/autotest_common.sh@10 -- # set +x 00:14:16.976 ************************************ 00:14:16.976 END TEST nvmf_abort 00:14:16.976 ************************************ 00:14:16.976 23:10:39 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:16.976 23:10:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:16.976 23:10:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:16.976 23:10:39 -- common/autotest_common.sh@10 -- # set +x 00:14:16.976 ************************************ 00:14:16.976 START TEST nvmf_ns_hotplug_stress 00:14:16.976 ************************************ 00:14:16.976 23:10:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:17.237 * Looking for test storage... 00:14:17.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:17.237 23:10:39 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:17.237 23:10:39 -- nvmf/common.sh@7 -- # uname -s 00:14:17.237 23:10:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:17.237 23:10:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:17.237 23:10:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:17.237 23:10:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:17.237 23:10:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:17.237 23:10:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:17.237 23:10:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:17.237 23:10:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:17.237 23:10:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:17.237 23:10:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:17.237 23:10:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:17.237 23:10:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:17.237 23:10:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:17.237 23:10:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:17.237 23:10:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:17.237 23:10:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:17.237 23:10:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:17.237 23:10:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:17.237 23:10:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:17.237 23:10:39 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.237 23:10:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.237 23:10:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.237 23:10:39 -- paths/export.sh@5 -- # export PATH 00:14:17.237 23:10:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.237 23:10:39 -- nvmf/common.sh@46 -- # : 0 00:14:17.237 23:10:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:17.237 23:10:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:17.237 23:10:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:17.237 23:10:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:17.237 23:10:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:17.237 23:10:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:17.237 23:10:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:17.237 23:10:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:17.237 23:10:39 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:17.237 23:10:39 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:14:17.237 23:10:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:17.237 23:10:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:17.237 23:10:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:17.237 23:10:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:17.237 23:10:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:17.237 23:10:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:14:17.237 23:10:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:17.237 23:10:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.237 23:10:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:17.237 23:10:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:17.237 23:10:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:17.237 23:10:39 -- common/autotest_common.sh@10 -- # set +x 00:14:25.382 23:10:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:25.382 23:10:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:25.382 23:10:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:25.382 23:10:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:25.382 23:10:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:25.382 23:10:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:25.382 23:10:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:25.382 23:10:46 -- nvmf/common.sh@294 -- # net_devs=() 00:14:25.382 23:10:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:25.382 23:10:46 -- nvmf/common.sh@295 -- # e810=() 00:14:25.382 23:10:46 -- nvmf/common.sh@295 -- # local -ga e810 00:14:25.382 23:10:46 -- nvmf/common.sh@296 -- # x722=() 00:14:25.382 23:10:46 -- nvmf/common.sh@296 -- # local -ga x722 00:14:25.382 23:10:46 -- nvmf/common.sh@297 -- # mlx=() 00:14:25.382 23:10:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:25.382 23:10:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:25.382 23:10:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:25.382 23:10:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:25.382 23:10:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:25.382 23:10:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:25.382 23:10:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:25.382 23:10:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:25.382 23:10:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:25.382 23:10:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:25.382 23:10:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:25.382 23:10:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:25.382 23:10:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:25.382 23:10:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:25.382 23:10:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:25.382 23:10:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:25.382 23:10:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:25.382 23:10:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:25.382 23:10:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:25.382 23:10:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:25.382 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:25.382 23:10:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:25.382 23:10:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:25.382 23:10:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.382 23:10:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.382 23:10:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:25.382 23:10:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:25.382 23:10:46 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:25.382 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:25.382 23:10:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:25.382 23:10:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:25.382 23:10:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.382 23:10:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.382 23:10:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:25.382 23:10:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:25.382 23:10:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:25.382 23:10:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:25.382 23:10:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:25.382 23:10:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.382 23:10:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:25.382 23:10:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.382 23:10:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:25.382 Found net devices under 0000:31:00.0: cvl_0_0 00:14:25.382 23:10:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.382 23:10:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:25.382 23:10:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.382 23:10:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:25.382 23:10:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.382 23:10:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:25.382 Found net devices under 0000:31:00.1: cvl_0_1 00:14:25.382 23:10:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.382 23:10:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:25.382 23:10:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:25.382 23:10:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:25.382 23:10:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:25.382 23:10:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:25.382 23:10:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:25.382 23:10:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:25.382 23:10:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:25.382 23:10:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:25.382 23:10:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:25.382 23:10:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:25.382 23:10:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:25.382 23:10:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:25.382 23:10:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:25.382 23:10:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:25.382 23:10:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:25.382 23:10:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:25.382 23:10:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:25.382 23:10:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:25.382 23:10:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:25.382 23:10:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:25.382 23:10:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
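As in the abort test, the NIC discovery above keys on PCI vendor/device IDs (only the two Intel 0x8086:0x159b E810 functions qualify on this rig) and then reads the matching net device names out of sysfs. A sysfs-only sketch of that lookup; the real gather_supported_nvmf_pci_devs also caches IDs for several other Intel and Mellanox parts:

    for pci in /sys/bus/pci/devices/*; do
        # keep only Intel E810 (8086:159b) functions, as this run does
        [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
        pci_net_devs=("$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")        # strip the sysfs path, keep the ifname
        echo "Found net devices under ${pci##*/}: ${pci_net_devs[*]}"
    done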
00:14:25.382 23:10:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:25.382 23:10:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:25.382 23:10:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:25.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:25.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:14:25.382 00:14:25.382 --- 10.0.0.2 ping statistics --- 00:14:25.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.382 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:14:25.382 23:10:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:25.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:25.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:14:25.382 00:14:25.382 --- 10.0.0.1 ping statistics --- 00:14:25.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.382 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:14:25.382 23:10:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:25.382 23:10:46 -- nvmf/common.sh@410 -- # return 0 00:14:25.382 23:10:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:25.382 23:10:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:25.382 23:10:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:25.382 23:10:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:25.382 23:10:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:25.382 23:10:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:25.382 23:10:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:25.383 23:10:46 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:14:25.383 23:10:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:25.383 23:10:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:25.383 23:10:46 -- common/autotest_common.sh@10 -- # set +x 00:14:25.383 23:10:46 -- nvmf/common.sh@469 -- # nvmfpid=2745879 00:14:25.383 23:10:46 -- nvmf/common.sh@470 -- # waitforlisten 2745879 00:14:25.383 23:10:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:25.383 23:10:46 -- common/autotest_common.sh@819 -- # '[' -z 2745879 ']' 00:14:25.383 23:10:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.383 23:10:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:25.383 23:10:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.383 23:10:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:25.383 23:10:46 -- common/autotest_common.sh@10 -- # set +x 00:14:25.383 [2024-06-07 23:10:47.034522] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
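Before the target application starts, the nvmf_tcp_init sequence traced above has built a small two-sided topology out of the two E810 ports: cvl_0_0 is moved into a private network namespace and becomes the target interface (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator interface (10.0.0.1). Collected from the traced commands (a reconstruction, not the verbatim nvmf/common.sh), the sequence is roughly:

  # target side lives in its own netns; initiator side stays in the root netns
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                  # root netns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target netns -> initiator
  modprobe nvme-tcp

The target process itself is then launched inside that namespace (the 'ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt' invocation below), while rpc.py keeps reaching it from the root namespace over the /var/tmp/spdk.sock UNIX socket.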
00:14:25.383 [2024-06-07 23:10:47.034575] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:25.383 EAL: No free 2048 kB hugepages reported on node 1 00:14:25.383 [2024-06-07 23:10:47.119018] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:25.383 [2024-06-07 23:10:47.159081] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:25.383 [2024-06-07 23:10:47.159259] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:25.383 [2024-06-07 23:10:47.159272] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:25.383 [2024-06-07 23:10:47.159281] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:25.383 [2024-06-07 23:10:47.159457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:25.383 [2024-06-07 23:10:47.159617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.383 [2024-06-07 23:10:47.159618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:25.383 23:10:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:25.383 23:10:47 -- common/autotest_common.sh@852 -- # return 0 00:14:25.383 23:10:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:25.383 23:10:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:25.383 23:10:47 -- common/autotest_common.sh@10 -- # set +x 00:14:25.383 23:10:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:25.383 23:10:47 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:14:25.383 23:10:47 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:25.383 [2024-06-07 23:10:47.977402] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:25.383 23:10:48 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:25.643 23:10:48 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:25.644 [2024-06-07 23:10:48.298857] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:25.904 23:10:48 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:25.904 23:10:48 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:14:26.165 Malloc0 00:14:26.165 23:10:48 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:26.165 Delay0 00:14:26.165 23:10:48 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:26.426 23:10:48 -- target/ns_hotplug_stress.sh@35 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:14:26.687 NULL1 00:14:26.687 23:10:49 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:26.687 23:10:49 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2746520 00:14:26.687 23:10:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:26.687 23:10:49 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:14:26.687 23:10:49 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.687 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.949 23:10:49 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:27.210 23:10:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:14:27.210 23:10:49 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:14:27.210 [2024-06-07 23:10:49.787419] bdev.c:4968:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:14:27.210 true 00:14:27.210 23:10:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:27.210 23:10:49 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.472 23:10:49 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:27.472 23:10:50 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:14:27.472 23:10:50 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:14:27.733 true 00:14:27.733 23:10:50 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:27.733 23:10:50 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.994 23:10:50 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:27.994 23:10:50 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:14:27.994 23:10:50 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:14:28.254 true 00:14:28.254 23:10:50 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:28.254 23:10:50 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:28.513 23:10:50 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:28.513 23:10:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:14:28.513 23:10:51 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:14:28.773 true 00:14:28.773 
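Stripped of timestamps and absolute paths, the setup just traced and the loop that the rest of this log repeats (ns_hotplug_stress.sh lines 44-50, visible in the '@NN' markers) look roughly as follows. This is a reconstruction from the trace, not the verbatim script, with 'rpc.py' standing in for the full scripts/rpc.py path:

  # one-time target setup (traced above)
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_malloc_create 32 512 -b Malloc0
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  rpc.py bdev_null_create NULL1 1000 512
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

  # I/O load from the initiator side while namespaces are hot-plugged
  spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!

  # hotplug/resize loop: re-plug namespace 1 and bump the NULL1 size by one each pass,
  # for as long as the perf process is still running
  null_size=1000
  while kill -0 "$PERF_PID"; do
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      rpc.py bdev_null_resize NULL1 $null_size
  done

Each block of '@44'..'@50' lines below is one pass of that loop; the null_size counter climbing from 1001 upward is the only thing that changes between passes.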
23:10:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:28.773 23:10:51 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.033 23:10:51 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:29.033 23:10:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:14:29.033 23:10:51 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:14:29.293 true 00:14:29.293 23:10:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:29.293 23:10:51 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.293 23:10:51 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:29.554 23:10:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:14:29.554 23:10:52 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:14:29.815 true 00:14:29.815 23:10:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:29.815 23:10:52 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.815 23:10:52 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:30.076 23:10:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:14:30.076 23:10:52 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:14:30.336 true 00:14:30.336 23:10:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:30.336 23:10:52 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.336 23:10:52 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:30.599 23:10:53 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:14:30.599 23:10:53 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:14:30.599 true 00:14:30.862 23:10:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:30.862 23:10:53 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.862 23:10:53 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:31.123 23:10:53 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:14:31.123 23:10:53 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:14:31.123 true 00:14:31.123 23:10:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:31.123 23:10:53 -- target/ns_hotplug_stress.sh@45 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.384 23:10:53 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:31.645 23:10:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:14:31.645 23:10:54 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:14:31.645 true 00:14:31.645 23:10:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:31.645 23:10:54 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.906 23:10:54 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:31.906 23:10:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:14:31.906 23:10:54 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:14:32.167 true 00:14:32.167 23:10:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:32.167 23:10:54 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.428 23:10:54 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:32.428 23:10:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:14:32.428 23:10:55 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:14:32.689 true 00:14:32.689 23:10:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:32.689 23:10:55 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.950 23:10:55 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:32.950 23:10:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:14:32.950 23:10:55 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:14:33.211 true 00:14:33.211 23:10:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:33.211 23:10:55 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:33.211 23:10:55 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:33.471 23:10:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:14:33.471 23:10:56 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:14:33.731 true 00:14:33.731 23:10:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:33.731 23:10:56 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:14:33.731 23:10:56 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:33.993 23:10:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:14:33.993 23:10:56 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:14:34.254 true 00:14:34.254 23:10:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:34.254 23:10:56 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:34.254 23:10:56 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:34.515 23:10:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:14:34.515 23:10:57 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:14:34.515 true 00:14:34.515 23:10:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:34.515 23:10:57 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:34.776 23:10:57 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:35.037 23:10:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:14:35.037 23:10:57 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:14:35.037 true 00:14:35.037 23:10:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:35.037 23:10:57 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:35.297 23:10:57 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:35.557 23:10:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:14:35.557 23:10:57 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:14:35.557 true 00:14:35.557 23:10:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:35.557 23:10:58 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:35.817 23:10:58 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:35.817 23:10:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:14:35.817 23:10:58 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:14:36.078 true 00:14:36.078 23:10:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:36.078 23:10:58 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:36.339 23:10:58 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:36.339 23:10:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:14:36.339 23:10:58 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:14:36.600 true 00:14:36.600 23:10:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:36.600 23:10:59 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:36.860 23:10:59 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:36.860 23:10:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:14:36.860 23:10:59 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:14:37.121 true 00:14:37.121 23:10:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:37.121 23:10:59 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:37.121 23:10:59 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:37.381 23:10:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:14:37.381 23:10:59 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:14:37.642 true 00:14:37.642 23:11:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:37.642 23:11:00 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:37.642 23:11:00 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:37.903 23:11:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:14:37.903 23:11:00 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:14:37.903 true 00:14:38.164 23:11:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:38.164 23:11:00 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:38.164 23:11:00 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:38.425 23:11:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:14:38.425 23:11:00 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:14:38.425 true 00:14:38.425 23:11:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:38.425 23:11:01 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:38.686 23:11:01 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:14:38.947 23:11:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:14:38.947 23:11:01 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:14:38.947 true 00:14:38.947 23:11:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:38.947 23:11:01 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:39.207 23:11:01 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:39.468 23:11:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:14:39.468 23:11:01 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:14:39.468 true 00:14:39.468 23:11:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:39.469 23:11:02 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:39.730 23:11:02 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:39.730 23:11:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:14:39.730 23:11:02 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:39.991 true 00:14:39.991 23:11:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:39.991 23:11:02 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:40.252 23:11:02 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:40.252 23:11:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:14:40.252 23:11:02 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:40.513 true 00:14:40.513 23:11:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:40.513 23:11:03 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:40.772 23:11:03 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:40.772 23:11:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:14:40.772 23:11:03 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:14:41.032 true 00:14:41.032 23:11:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:41.032 23:11:03 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:41.032 23:11:03 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:41.291 23:11:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:14:41.291 23:11:03 -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:14:41.552 true 00:14:41.552 23:11:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:41.552 23:11:04 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:41.552 23:11:04 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:41.812 23:11:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:14:41.812 23:11:04 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:14:42.073 true 00:14:42.073 23:11:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:42.073 23:11:04 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.073 23:11:04 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:42.334 23:11:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:14:42.334 23:11:04 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:14:42.334 true 00:14:42.595 23:11:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:42.595 23:11:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.595 23:11:05 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:42.857 23:11:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:14:42.857 23:11:05 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:14:42.857 true 00:14:42.857 23:11:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:42.857 23:11:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:43.117 23:11:05 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:43.378 23:11:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:14:43.378 23:11:05 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:14:43.378 true 00:14:43.378 23:11:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:43.378 23:11:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:43.638 23:11:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:43.638 23:11:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:14:43.638 23:11:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1035 00:14:43.899 true 00:14:43.899 23:11:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:43.899 23:11:06 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:44.159 23:11:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:44.159 23:11:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:14:44.159 23:11:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:14:44.420 true 00:14:44.420 23:11:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:44.420 23:11:06 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:44.681 23:11:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:44.681 23:11:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:14:44.681 23:11:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:14:44.942 true 00:14:44.942 23:11:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:44.942 23:11:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:44.942 23:11:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:45.202 23:11:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:14:45.202 23:11:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:14:45.463 true 00:14:45.463 23:11:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:45.463 23:11:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:45.463 23:11:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:45.723 23:11:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:14:45.723 23:11:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:14:45.723 true 00:14:45.983 23:11:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:45.983 23:11:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:45.983 23:11:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:46.242 23:11:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:14:46.243 23:11:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:14:46.243 true 00:14:46.243 23:11:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:46.243 
23:11:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:46.503 23:11:09 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:46.763 23:11:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:14:46.763 23:11:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:14:46.763 true 00:14:46.763 23:11:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:46.763 23:11:09 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:47.023 23:11:09 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:47.283 23:11:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:14:47.283 23:11:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:14:47.283 true 00:14:47.283 23:11:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:47.283 23:11:09 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:47.542 23:11:10 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:47.542 23:11:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:14:47.542 23:11:10 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:14:47.802 true 00:14:47.802 23:11:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:47.802 23:11:10 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.062 23:11:10 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:48.062 23:11:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:14:48.062 23:11:10 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:14:48.322 true 00:14:48.322 23:11:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:48.322 23:11:10 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.583 23:11:11 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:48.583 23:11:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:14:48.583 23:11:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:14:48.843 true 00:14:48.843 23:11:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:48.843 23:11:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.843 23:11:11 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:49.104 23:11:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:14:49.104 23:11:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:14:49.365 true 00:14:49.365 23:11:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:49.365 23:11:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:49.365 23:11:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:49.625 23:11:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:14:49.625 23:11:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:14:49.886 true 00:14:49.886 23:11:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:49.886 23:11:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:49.886 23:11:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:50.147 23:11:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:14:50.147 23:11:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:14:50.147 true 00:14:50.408 23:11:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:50.408 23:11:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:50.408 23:11:13 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:50.668 23:11:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:14:50.668 23:11:13 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:14:50.668 true 00:14:50.668 23:11:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:50.668 23:11:13 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:50.927 23:11:13 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:51.235 23:11:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:14:51.235 23:11:13 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:14:51.235 true 00:14:51.235 23:11:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:51.235 23:11:13 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.522 23:11:13 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:51.522 23:11:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:14:51.522 23:11:14 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:14:51.822 true 00:14:51.822 23:11:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:51.822 23:11:14 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.822 23:11:14 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:52.104 23:11:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:14:52.104 23:11:14 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:14:52.104 true 00:14:52.365 23:11:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:52.366 23:11:14 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:52.366 23:11:14 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:52.626 23:11:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:14:52.626 23:11:15 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:14:52.626 true 00:14:52.626 23:11:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:52.626 23:11:15 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:52.888 23:11:15 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:53.149 23:11:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:14:53.149 23:11:15 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:14:53.149 true 00:14:53.149 23:11:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:53.149 23:11:15 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:53.410 23:11:15 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:53.410 23:11:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:14:53.410 23:11:16 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:14:53.671 true 00:14:53.671 23:11:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:53.671 23:11:16 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:53.931 23:11:16 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:14:53.931 23:11:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:14:53.931 23:11:16 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:14:54.193 true 00:14:54.193 23:11:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:54.193 23:11:16 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:54.193 23:11:16 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:54.453 23:11:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1057 00:14:54.453 23:11:17 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057 00:14:54.714 true 00:14:54.714 23:11:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:54.714 23:11:17 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:54.714 23:11:17 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:54.974 23:11:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1058 00:14:54.974 23:11:17 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058 00:14:54.974 true 00:14:55.235 23:11:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:55.235 23:11:17 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:55.235 23:11:17 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:55.496 23:11:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1059 00:14:55.496 23:11:17 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059 00:14:55.496 true 00:14:55.496 23:11:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:55.496 23:11:18 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:55.756 23:11:18 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:56.017 23:11:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1060 00:14:56.017 23:11:18 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1060 00:14:56.017 true 00:14:56.017 23:11:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:56.017 23:11:18 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:56.277 23:11:18 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:56.277 23:11:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1061 00:14:56.277 23:11:18 -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1061 00:14:56.538 true 00:14:56.538 23:11:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:56.538 23:11:19 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:56.799 23:11:19 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:56.799 23:11:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1062 00:14:56.799 23:11:19 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1062 00:14:57.060 Initializing NVMe Controllers 00:14:57.060 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:57.060 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1 00:14:57.060 Controller IO queue size 128, less than required. 00:14:57.060 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:57.060 WARNING: Some requested NVMe devices were skipped 00:14:57.060 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:57.060 Initialization complete. Launching workers. 00:14:57.060 ======================================================== 00:14:57.060 Latency(us) 00:14:57.060 Device Information : IOPS MiB/s Average min max 00:14:57.060 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 33252.46 16.24 3849.32 1488.70 10573.07 00:14:57.060 ======================================================== 00:14:57.060 Total : 33252.46 16.24 3849.32 1488.70 10573.07 00:14:57.060 00:14:57.060 true 00:14:57.060 23:11:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746520 00:14:57.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2746520) - No such process 00:14:57.060 23:11:19 -- target/ns_hotplug_stress.sh@53 -- # wait 2746520 00:14:57.060 23:11:19 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:57.322 23:11:19 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:57.322 23:11:19 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:14:57.322 23:11:19 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:14:57.322 23:11:19 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:14:57.322 23:11:19 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:57.322 23:11:19 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:14:57.582 null0 00:14:57.582 23:11:20 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:57.582 23:11:20 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:57.582 23:11:20 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:14:57.582 null1 00:14:57.582 23:11:20 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:57.582 23:11:20 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:57.582 23:11:20 -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:14:57.843 null2 00:14:57.843 23:11:20 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:57.843 23:11:20 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:57.843 23:11:20 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:14:58.103 null3 00:14:58.103 23:11:20 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:58.103 23:11:20 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:58.103 23:11:20 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:14:58.103 null4 00:14:58.103 23:11:20 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:58.103 23:11:20 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:58.103 23:11:20 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:14:58.363 null5 00:14:58.363 23:11:20 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:58.363 23:11:20 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:58.363 23:11:20 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:14:58.363 null6 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:14:58.625 null7 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@66 -- # wait 2752846 2752848 2752851 2752853 2752856 2752858 2752859 2752862 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:14:58.625 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:58.626 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.626 23:11:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:58.886 23:11:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:58.886 23:11:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:58.886 23:11:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:58.886 23:11:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:58.886 23:11:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:58.886 23:11:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:58.887 23:11:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:58.887 23:11:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:58.887 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.887 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.887 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.887 23:11:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:58.887 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.887 23:11:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:59.148 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.148 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.148 23:11:21 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:59.148 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.148 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.148 23:11:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:59.148 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.148 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.148 23:11:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:59.148 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.148 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.148 23:11:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:59.148 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.148 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.148 23:11:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:59.148 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.148 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.148 23:11:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:59.148 23:11:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:59.148 23:11:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:59.148 23:11:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:59.148 23:11:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:59.148 23:11:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:59.148 23:11:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:59.148 23:11:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:59.409 23:11:21 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:59.409 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.409 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.409 23:11:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 
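Each add_remove worker traced above (sh@14 and sh@16-18) repeatedly attaches its bdev to subsystem nqn.2016-06.io.spdk:cnode1 and detaches it again, ten times, which produces the interleaved add_ns/remove_ns churn in this log. Reconstructed sketch from the trace; quoting and helper names are assumed:

    add_remove() {
        local nsid=$1 bdev=$2
        for (( i = 0; i < 10; i++ )); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }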
00:14:59.409 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.409 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.409 23:11:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:59.409 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.409 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.409 23:11:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:59.409 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.409 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.409 23:11:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:59.409 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.409 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.409 23:11:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:59.409 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.409 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.409 23:11:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:59.409 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.409 23:11:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.409 23:11:21 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:59.409 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.409 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.409 23:11:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:59.409 23:11:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:59.409 23:11:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:59.409 23:11:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:59.409 23:11:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:59.670 23:11:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:59.670 23:11:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:59.670 23:11:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:14:59.670 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.670 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.670 23:11:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:59.670 23:11:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:59.670 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.670 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.670 23:11:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:59.670 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.670 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.670 23:11:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:59.670 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.670 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.670 23:11:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:59.670 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.670 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.670 23:11:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:59.670 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.670 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.670 23:11:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:59.670 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.670 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.670 23:11:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:59.670 23:11:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:59.670 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.670 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.670 23:11:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:59.931 23:11:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:59.931 23:11:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:59.931 23:11:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:14:59.931 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.931 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.931 23:11:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:59.931 23:11:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:59.931 23:11:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:59.931 23:11:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:59.931 23:11:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:59.931 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.931 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.931 23:11:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:59.931 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.931 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.931 23:11:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:00.192 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.192 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.192 23:11:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:00.192 23:11:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:00.192 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.192 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.192 23:11:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:00.192 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.192 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.192 23:11:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:00.192 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.192 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.192 23:11:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:00.192 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.192 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.192 23:11:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:15:00.192 23:11:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:00.192 23:11:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:00.192 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.192 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.193 23:11:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:00.193 23:11:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:00.193 23:11:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:00.193 23:11:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:00.193 23:11:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:00.193 23:11:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:00.193 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.193 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.193 23:11:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:00.453 23:11:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:00.454 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.454 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.454 23:11:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:00.454 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.454 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.454 23:11:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:00.454 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.454 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.454 23:11:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:00.454 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.454 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.454 23:11:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:00.454 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.454 23:11:22 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.454 23:11:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:00.454 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.454 23:11:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.454 23:11:22 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:00.454 23:11:22 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:00.454 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.454 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.454 23:11:23 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:00.454 23:11:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:00.454 23:11:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:00.454 23:11:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:00.454 23:11:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:00.454 23:11:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:00.714 23:11:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:00.714 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.714 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.714 23:11:23 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:00.714 23:11:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:00.714 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.714 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.714 23:11:23 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:00.714 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.714 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.714 23:11:23 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:00.714 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.714 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.714 23:11:23 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:00.714 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.714 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.714 23:11:23 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:00.714 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.714 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.714 23:11:23 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:00.714 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.714 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.714 23:11:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:00.714 23:11:23 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:00.714 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.714 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.714 23:11:23 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:00.974 23:11:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:00.974 23:11:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:00.974 23:11:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:00.974 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.974 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.974 23:11:23 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:00.974 23:11:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:00.974 23:11:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:00.974 23:11:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:00.974 23:11:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:00.974 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.974 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.974 23:11:23 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 
00:15:00.974 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.974 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.974 23:11:23 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:00.974 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.974 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.974 23:11:23 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:01.235 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:01.235 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:01.235 23:11:23 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:01.235 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:01.235 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:01.235 23:11:23 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:01.235 23:11:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:01.235 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:01.235 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:01.235 23:11:23 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:01.235 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:01.235 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:01.235 23:11:23 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:01.235 23:11:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:01.235 23:11:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:01.235 23:11:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:01.235 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:01.235 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:01.235 23:11:23 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:01.235 23:11:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:01.235 23:11:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:01.235 23:11:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:15:01.235 23:11:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:01.235 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:01.235 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:01.235 23:11:23 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:01.496 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:01.496 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:01.496 23:11:23 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:01.496 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:01.496 23:11:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:01.496 23:11:23 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:01.496 23:11:23 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:01.496 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:01.496 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:01.496 23:11:24 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:01.496 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:01.496 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:01.496 23:11:24 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:01.496 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:01.496 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:01.496 23:11:24 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:01.496 23:11:24 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:01.496 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:01.496 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:01.496 23:11:24 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:01.496 23:11:24 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:01.496 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:01.496 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:01.496 23:11:24 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:01.496 23:11:24 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:15:01.496 23:11:24 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:01.757 23:11:24 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:01.757 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:01.757 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:01.757 23:11:24 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:01.758 23:11:24 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:01.758 23:11:24 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:01.758 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:01.758 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:01.758 23:11:24 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:01.758 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:01.758 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:01.758 23:11:24 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:01.758 23:11:24 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:01.758 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:01.758 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:01.758 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:01.758 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:01.758 23:11:24 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:01.758 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:01.758 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:01.758 23:11:24 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:01.758 23:11:24 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:01.758 23:11:24 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:02.018 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:02.018 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.018 23:11:24 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:02.018 23:11:24 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:02.018 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:02.018 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.018 23:11:24 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:02.018 23:11:24 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:02.018 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:02.018 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.018 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:02.018 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.018 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:02.018 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.018 23:11:24 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:02.018 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:02.018 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.018 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:02.018 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.280 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:02.280 23:11:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.280 23:11:24 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:02.280 23:11:24 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:15:02.280 23:11:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:02.280 23:11:24 -- nvmf/common.sh@116 -- # sync 00:15:02.280 23:11:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:02.280 23:11:24 -- nvmf/common.sh@119 -- # set +e 00:15:02.280 23:11:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:02.280 23:11:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:02.280 rmmod nvme_tcp 00:15:02.280 rmmod nvme_fabrics 00:15:02.280 rmmod nvme_keyring 00:15:02.280 23:11:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:02.280 23:11:24 -- nvmf/common.sh@123 -- # set -e 00:15:02.280 23:11:24 -- nvmf/common.sh@124 -- # return 0 00:15:02.280 23:11:24 -- nvmf/common.sh@477 -- # '[' -n 2745879 ']' 00:15:02.280 23:11:24 -- nvmf/common.sh@478 -- # killprocess 2745879 00:15:02.280 23:11:24 -- common/autotest_common.sh@926 -- # '[' -z 2745879 ']' 00:15:02.280 23:11:24 -- common/autotest_common.sh@930 -- # kill -0 2745879 00:15:02.280 23:11:24 -- common/autotest_common.sh@931 -- # uname 00:15:02.280 23:11:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:02.280 23:11:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2745879 00:15:02.280 23:11:24 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:02.280 23:11:24 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:02.280 23:11:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2745879' 00:15:02.280 killing process with pid 2745879 00:15:02.280 23:11:24 -- common/autotest_common.sh@945 -- # kill 2745879 00:15:02.280 23:11:24 -- common/autotest_common.sh@950 -- # wait 2745879 00:15:02.541 23:11:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:02.541 23:11:25 -- nvmf/common.sh@483 -- # [[ tcp == 
\t\c\p ]] 00:15:02.541 23:11:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:02.541 23:11:25 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:02.541 23:11:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:02.541 23:11:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.541 23:11:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.541 23:11:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.457 23:11:27 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:04.457 00:15:04.457 real 0m47.546s 00:15:04.457 user 3m13.082s 00:15:04.457 sys 0m16.386s 00:15:04.457 23:11:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:04.457 23:11:27 -- common/autotest_common.sh@10 -- # set +x 00:15:04.457 ************************************ 00:15:04.457 END TEST nvmf_ns_hotplug_stress 00:15:04.457 ************************************ 00:15:04.718 23:11:27 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:04.718 23:11:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:04.718 23:11:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:04.718 23:11:27 -- common/autotest_common.sh@10 -- # set +x 00:15:04.718 ************************************ 00:15:04.718 START TEST nvmf_connect_stress 00:15:04.718 ************************************ 00:15:04.718 23:11:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:04.718 * Looking for test storage... 00:15:04.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:04.718 23:11:27 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:04.719 23:11:27 -- nvmf/common.sh@7 -- # uname -s 00:15:04.719 23:11:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:04.719 23:11:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:04.719 23:11:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:04.719 23:11:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:04.719 23:11:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:04.719 23:11:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:04.719 23:11:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:04.719 23:11:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:04.719 23:11:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:04.719 23:11:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:04.719 23:11:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:04.719 23:11:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:04.719 23:11:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:04.719 23:11:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:04.719 23:11:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:04.719 23:11:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:04.719 23:11:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:04.719 23:11:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:04.719 23:11:27 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:15:04.719 23:11:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.719 23:11:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.719 23:11:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.719 23:11:27 -- paths/export.sh@5 -- # export PATH 00:15:04.719 23:11:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.719 23:11:27 -- nvmf/common.sh@46 -- # : 0 00:15:04.719 23:11:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:04.719 23:11:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:04.719 23:11:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:04.719 23:11:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:04.719 23:11:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:04.719 23:11:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:04.719 23:11:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:04.719 23:11:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:04.719 23:11:27 -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:04.719 23:11:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:04.719 23:11:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:04.719 23:11:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:04.719 23:11:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:04.719 23:11:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:04.719 23:11:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.719 23:11:27 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:04.719 23:11:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.719 23:11:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:04.719 23:11:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:04.719 23:11:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:04.719 23:11:27 -- common/autotest_common.sh@10 -- # set +x 00:15:12.863 23:11:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:12.863 23:11:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:12.863 23:11:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:12.863 23:11:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:12.863 23:11:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:12.863 23:11:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:12.863 23:11:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:12.863 23:11:34 -- nvmf/common.sh@294 -- # net_devs=() 00:15:12.863 23:11:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:12.863 23:11:34 -- nvmf/common.sh@295 -- # e810=() 00:15:12.863 23:11:34 -- nvmf/common.sh@295 -- # local -ga e810 00:15:12.863 23:11:34 -- nvmf/common.sh@296 -- # x722=() 00:15:12.863 23:11:34 -- nvmf/common.sh@296 -- # local -ga x722 00:15:12.863 23:11:34 -- nvmf/common.sh@297 -- # mlx=() 00:15:12.863 23:11:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:12.863 23:11:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:12.863 23:11:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:12.863 23:11:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:12.863 23:11:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:12.863 23:11:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:12.863 23:11:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:12.863 23:11:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:12.863 23:11:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:12.863 23:11:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:12.863 23:11:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:12.863 23:11:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:12.863 23:11:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:12.863 23:11:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:12.863 23:11:34 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:12.863 23:11:34 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:12.863 23:11:34 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:12.863 23:11:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:12.863 23:11:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:12.863 23:11:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:12.863 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:12.863 23:11:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:12.863 23:11:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:12.863 23:11:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:12.863 23:11:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:12.863 23:11:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:12.863 23:11:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:12.863 23:11:34 -- nvmf/common.sh@340 -- # echo 'Found 
0000:31:00.1 (0x8086 - 0x159b)' 00:15:12.863 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:12.863 23:11:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:12.863 23:11:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:12.863 23:11:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:12.863 23:11:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:12.863 23:11:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:12.863 23:11:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:12.863 23:11:34 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:12.863 23:11:34 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:12.863 23:11:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:12.863 23:11:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.863 23:11:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:12.863 23:11:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.863 23:11:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:12.863 Found net devices under 0000:31:00.0: cvl_0_0 00:15:12.863 23:11:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:12.863 23:11:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:12.863 23:11:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.863 23:11:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:12.863 23:11:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.863 23:11:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:12.863 Found net devices under 0000:31:00.1: cvl_0_1 00:15:12.863 23:11:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:12.863 23:11:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:12.863 23:11:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:12.863 23:11:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:12.863 23:11:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:12.863 23:11:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:12.863 23:11:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:12.863 23:11:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:12.863 23:11:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:12.863 23:11:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:12.863 23:11:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:12.863 23:11:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:12.863 23:11:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:12.863 23:11:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:12.863 23:11:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:12.863 23:11:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:12.863 23:11:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:12.863 23:11:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:12.863 23:11:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:12.863 23:11:34 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:12.863 23:11:34 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:12.863 23:11:34 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:12.863 23:11:34 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:12.863 23:11:34 -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:12.863 23:11:34 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:12.864 23:11:34 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:12.864 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:12.864 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.746 ms 00:15:12.864 00:15:12.864 --- 10.0.0.2 ping statistics --- 00:15:12.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.864 rtt min/avg/max/mdev = 0.746/0.746/0.746/0.000 ms 00:15:12.864 23:11:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:12.864 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:12.864 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.347 ms 00:15:12.864 00:15:12.864 --- 10.0.0.1 ping statistics --- 00:15:12.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.864 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:15:12.864 23:11:34 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:12.864 23:11:34 -- nvmf/common.sh@410 -- # return 0 00:15:12.864 23:11:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:12.864 23:11:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:12.864 23:11:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:12.864 23:11:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:12.864 23:11:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:12.864 23:11:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:12.864 23:11:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:12.864 23:11:34 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:12.864 23:11:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:12.864 23:11:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:12.864 23:11:34 -- common/autotest_common.sh@10 -- # set +x 00:15:12.864 23:11:34 -- nvmf/common.sh@469 -- # nvmfpid=2758064 00:15:12.864 23:11:34 -- nvmf/common.sh@470 -- # waitforlisten 2758064 00:15:12.864 23:11:34 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:12.864 23:11:34 -- common/autotest_common.sh@819 -- # '[' -z 2758064 ']' 00:15:12.864 23:11:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.864 23:11:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:12.864 23:11:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.864 23:11:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:12.864 23:11:34 -- common/autotest_common.sh@10 -- # set +x 00:15:12.864 [2024-06-07 23:11:34.552161] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
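The nvmf_tcp_init sequence traced above (nvmf/common.sh@228-267) reduces to: move the target-side port into its own network namespace, address both ends on 10.0.0.0/24, open TCP port 4420, and verify connectivity in both directions. Condensed from the trace, assuming the same interface names (cvl_0_0 on the target side, cvl_0_1 on the initiator side):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP in
    ping -c 1 10.0.0.2                                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator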
00:15:12.864 [2024-06-07 23:11:34.552222] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.864 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.864 [2024-06-07 23:11:34.642332] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:12.864 [2024-06-07 23:11:34.687418] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:12.864 [2024-06-07 23:11:34.687578] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.864 [2024-06-07 23:11:34.687590] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.864 [2024-06-07 23:11:34.687599] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:12.864 [2024-06-07 23:11:34.687744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.864 [2024-06-07 23:11:34.687906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.864 [2024-06-07 23:11:34.687906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:12.864 23:11:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:12.864 23:11:35 -- common/autotest_common.sh@852 -- # return 0 00:15:12.864 23:11:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:12.864 23:11:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:12.864 23:11:35 -- common/autotest_common.sh@10 -- # set +x 00:15:12.864 23:11:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.864 23:11:35 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:12.864 23:11:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.864 23:11:35 -- common/autotest_common.sh@10 -- # set +x 00:15:12.864 [2024-06-07 23:11:35.355311] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:12.864 23:11:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.864 23:11:35 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:12.864 23:11:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.864 23:11:35 -- common/autotest_common.sh@10 -- # set +x 00:15:12.864 23:11:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.864 23:11:35 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:12.864 23:11:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.864 23:11:35 -- common/autotest_common.sh@10 -- # set +x 00:15:12.864 [2024-06-07 23:11:35.379700] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:12.864 23:11:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.864 23:11:35 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:12.864 23:11:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.864 23:11:35 -- common/autotest_common.sh@10 -- # set +x 00:15:12.864 NULL1 00:15:12.864 23:11:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.864 23:11:35 -- target/connect_stress.sh@21 -- # PERF_PID=2758257 00:15:12.864 23:11:35 -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:12.864 23:11:35 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:12.864 23:11:35 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:12.864 23:11:35 -- target/connect_stress.sh@27 -- # seq 1 20 00:15:12.864 23:11:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.864 23:11:35 -- target/connect_stress.sh@28 -- # cat 00:15:12.864 23:11:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.864 23:11:35 -- target/connect_stress.sh@28 -- # cat 00:15:12.864 23:11:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.864 23:11:35 -- target/connect_stress.sh@28 -- # cat 00:15:12.864 23:11:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.864 23:11:35 -- target/connect_stress.sh@28 -- # cat 00:15:12.864 23:11:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.864 23:11:35 -- target/connect_stress.sh@28 -- # cat 00:15:12.864 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.864 23:11:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.864 23:11:35 -- target/connect_stress.sh@28 -- # cat 00:15:12.864 23:11:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.864 23:11:35 -- target/connect_stress.sh@28 -- # cat 00:15:12.864 23:11:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.864 23:11:35 -- target/connect_stress.sh@28 -- # cat 00:15:12.864 23:11:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.864 23:11:35 -- target/connect_stress.sh@28 -- # cat 00:15:12.864 23:11:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.864 23:11:35 -- target/connect_stress.sh@28 -- # cat 00:15:12.864 23:11:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.864 23:11:35 -- target/connect_stress.sh@28 -- # cat 00:15:12.864 23:11:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.864 23:11:35 -- target/connect_stress.sh@28 -- # cat 00:15:12.864 23:11:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.864 23:11:35 -- target/connect_stress.sh@28 -- # cat 00:15:12.864 23:11:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.864 23:11:35 -- target/connect_stress.sh@28 -- # cat 00:15:12.864 23:11:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.864 23:11:35 -- target/connect_stress.sh@28 -- # cat 00:15:12.864 23:11:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.864 23:11:35 -- target/connect_stress.sh@28 -- # cat 00:15:12.864 23:11:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.864 23:11:35 -- target/connect_stress.sh@28 -- # cat 00:15:12.864 23:11:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.864 23:11:35 -- target/connect_stress.sh@28 -- # cat 00:15:12.864 23:11:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.864 23:11:35 -- target/connect_stress.sh@28 -- # cat 00:15:12.864 23:11:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.864 23:11:35 -- target/connect_stress.sh@28 -- # cat 00:15:12.864 23:11:35 -- target/connect_stress.sh@34 -- # kill -0 2758257 00:15:12.864 23:11:35 -- target/connect_stress.sh@35 -- # 
rpc_cmd 00:15:12.864 23:11:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.864 23:11:35 -- common/autotest_common.sh@10 -- # set +x 00:15:13.434 23:11:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:13.434 23:11:35 -- target/connect_stress.sh@34 -- # kill -0 2758257 00:15:13.434 23:11:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:13.434 23:11:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:13.434 23:11:35 -- common/autotest_common.sh@10 -- # set +x 00:15:13.694 23:11:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:13.694 23:11:36 -- target/connect_stress.sh@34 -- # kill -0 2758257 00:15:13.694 23:11:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:13.694 23:11:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:13.694 23:11:36 -- common/autotest_common.sh@10 -- # set +x 00:15:13.954 23:11:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:13.954 23:11:36 -- target/connect_stress.sh@34 -- # kill -0 2758257 00:15:13.954 23:11:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:13.954 23:11:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:13.954 23:11:36 -- common/autotest_common.sh@10 -- # set +x 00:15:14.214 23:11:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:14.214 23:11:36 -- target/connect_stress.sh@34 -- # kill -0 2758257 00:15:14.214 23:11:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:14.214 23:11:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:14.214 23:11:36 -- common/autotest_common.sh@10 -- # set +x 00:15:14.474 23:11:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:14.474 23:11:37 -- target/connect_stress.sh@34 -- # kill -0 2758257 00:15:14.474 23:11:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:14.474 23:11:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:14.474 23:11:37 -- common/autotest_common.sh@10 -- # set +x 00:15:15.044 23:11:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:15.044 23:11:37 -- target/connect_stress.sh@34 -- # kill -0 2758257 00:15:15.044 23:11:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:15.044 23:11:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:15.044 23:11:37 -- common/autotest_common.sh@10 -- # set +x 00:15:15.304 23:11:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:15.304 23:11:37 -- target/connect_stress.sh@34 -- # kill -0 2758257 00:15:15.304 23:11:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:15.304 23:11:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:15.304 23:11:37 -- common/autotest_common.sh@10 -- # set +x 00:15:15.564 23:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:15.564 23:11:38 -- target/connect_stress.sh@34 -- # kill -0 2758257 00:15:15.564 23:11:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:15.564 23:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:15.564 23:11:38 -- common/autotest_common.sh@10 -- # set +x 00:15:15.824 23:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:15.824 23:11:38 -- target/connect_stress.sh@34 -- # kill -0 2758257 00:15:15.824 23:11:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:15.824 23:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:15.824 23:11:38 -- common/autotest_common.sh@10 -- # set +x 00:15:16.085 23:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:16.085 23:11:38 -- target/connect_stress.sh@34 -- # kill -0 2758257 00:15:16.085 23:11:38 -- target/connect_stress.sh@35 -- # rpc_cmd 
00:15:16.085 23:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:16.085 23:11:38 -- common/autotest_common.sh@10 -- # set +x 00:15:16.656 23:11:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:16.656 23:11:39 -- target/connect_stress.sh@34 -- # kill -0 2758257 00:15:16.656 23:11:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:16.656 23:11:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:16.656 23:11:39 -- common/autotest_common.sh@10 -- # set +x 00:15:16.916 23:11:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:16.916 23:11:39 -- target/connect_stress.sh@34 -- # kill -0 2758257 00:15:16.916 23:11:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:16.916 23:11:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:16.916 23:11:39 -- common/autotest_common.sh@10 -- # set +x 00:15:17.176 23:11:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:17.176 23:11:39 -- target/connect_stress.sh@34 -- # kill -0 2758257 00:15:17.176 23:11:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:17.176 23:11:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:17.176 23:11:39 -- common/autotest_common.sh@10 -- # set +x 00:15:17.436 23:11:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:17.436 23:11:40 -- target/connect_stress.sh@34 -- # kill -0 2758257 00:15:17.436 23:11:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:17.436 23:11:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:17.436 23:11:40 -- common/autotest_common.sh@10 -- # set +x 00:15:17.754 23:11:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:17.754 23:11:40 -- target/connect_stress.sh@34 -- # kill -0 2758257 00:15:17.754 23:11:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:17.754 23:11:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:17.754 23:11:40 -- common/autotest_common.sh@10 -- # set +x 00:15:18.329 23:11:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:18.329 23:11:40 -- target/connect_stress.sh@34 -- # kill -0 2758257 00:15:18.329 23:11:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:18.329 23:11:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:18.329 23:11:40 -- common/autotest_common.sh@10 -- # set +x 00:15:18.588 23:11:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:18.588 23:11:41 -- target/connect_stress.sh@34 -- # kill -0 2758257 00:15:18.588 23:11:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:18.588 23:11:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:18.588 23:11:41 -- common/autotest_common.sh@10 -- # set +x 00:15:18.849 23:11:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:18.849 23:11:41 -- target/connect_stress.sh@34 -- # kill -0 2758257 00:15:18.849 23:11:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:18.849 23:11:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:18.849 23:11:41 -- common/autotest_common.sh@10 -- # set +x 00:15:19.109 23:11:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:19.109 23:11:41 -- target/connect_stress.sh@34 -- # kill -0 2758257 00:15:19.109 23:11:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:19.109 23:11:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:19.109 23:11:41 -- common/autotest_common.sh@10 -- # set +x 00:15:19.370 23:11:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:19.370 23:11:42 -- target/connect_stress.sh@34 -- # kill -0 2758257 00:15:19.370 23:11:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:19.370 
23:11:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:19.370 23:11:42 -- common/autotest_common.sh@10 -- # set +x 00:15:19.942 23:11:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:19.942 23:11:42 -- target/connect_stress.sh@34 -- # kill -0 2758257 00:15:19.942 23:11:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:19.942 23:11:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:19.942 23:11:42 -- common/autotest_common.sh@10 -- # set +x 00:15:20.202 23:11:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:20.202 23:11:42 -- target/connect_stress.sh@34 -- # kill -0 2758257 00:15:20.202 23:11:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:20.202 23:11:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:20.202 23:11:42 -- common/autotest_common.sh@10 -- # set +x 00:15:20.463 23:11:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:20.463 23:11:42 -- target/connect_stress.sh@34 -- # kill -0 2758257 00:15:20.463 23:11:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:20.463 23:11:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:20.463 23:11:42 -- common/autotest_common.sh@10 -- # set +x 00:15:20.723 23:11:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:20.723 23:11:43 -- target/connect_stress.sh@34 -- # kill -0 2758257 00:15:20.723 23:11:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:20.723 23:11:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:20.723 23:11:43 -- common/autotest_common.sh@10 -- # set +x 00:15:20.983 23:11:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:20.983 23:11:43 -- target/connect_stress.sh@34 -- # kill -0 2758257 00:15:20.983 23:11:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:20.983 23:11:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:20.983 23:11:43 -- common/autotest_common.sh@10 -- # set +x 00:15:21.554 23:11:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:21.554 23:11:43 -- target/connect_stress.sh@34 -- # kill -0 2758257 00:15:21.554 23:11:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:21.554 23:11:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:21.554 23:11:43 -- common/autotest_common.sh@10 -- # set +x 00:15:21.814 23:11:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:21.814 23:11:44 -- target/connect_stress.sh@34 -- # kill -0 2758257 00:15:21.814 23:11:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:21.814 23:11:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:21.814 23:11:44 -- common/autotest_common.sh@10 -- # set +x 00:15:22.074 23:11:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:22.074 23:11:44 -- target/connect_stress.sh@34 -- # kill -0 2758257 00:15:22.074 23:11:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:22.074 23:11:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:22.074 23:11:44 -- common/autotest_common.sh@10 -- # set +x 00:15:22.399 23:11:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:22.399 23:11:44 -- target/connect_stress.sh@34 -- # kill -0 2758257 00:15:22.399 23:11:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:22.399 23:11:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:22.399 23:11:44 -- common/autotest_common.sh@10 -- # set +x 00:15:22.659 23:11:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:22.659 23:11:45 -- target/connect_stress.sh@34 -- # kill -0 2758257 00:15:22.659 23:11:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:22.659 23:11:45 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:15:22.659 23:11:45 -- common/autotest_common.sh@10 -- # set +x 00:15:22.920 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:22.920 23:11:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:22.920 23:11:45 -- target/connect_stress.sh@34 -- # kill -0 2758257 00:15:22.920 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2758257) - No such process 00:15:22.920 23:11:45 -- target/connect_stress.sh@38 -- # wait 2758257 00:15:22.920 23:11:45 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:22.920 23:11:45 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:22.920 23:11:45 -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:22.920 23:11:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:22.920 23:11:45 -- nvmf/common.sh@116 -- # sync 00:15:22.920 23:11:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:22.920 23:11:45 -- nvmf/common.sh@119 -- # set +e 00:15:22.920 23:11:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:23.281 23:11:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:23.281 rmmod nvme_tcp 00:15:23.281 rmmod nvme_fabrics 00:15:23.281 rmmod nvme_keyring 00:15:23.281 23:11:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:23.281 23:11:45 -- nvmf/common.sh@123 -- # set -e 00:15:23.281 23:11:45 -- nvmf/common.sh@124 -- # return 0 00:15:23.281 23:11:45 -- nvmf/common.sh@477 -- # '[' -n 2758064 ']' 00:15:23.281 23:11:45 -- nvmf/common.sh@478 -- # killprocess 2758064 00:15:23.281 23:11:45 -- common/autotest_common.sh@926 -- # '[' -z 2758064 ']' 00:15:23.281 23:11:45 -- common/autotest_common.sh@930 -- # kill -0 2758064 00:15:23.281 23:11:45 -- common/autotest_common.sh@931 -- # uname 00:15:23.281 23:11:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:23.281 23:11:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2758064 00:15:23.282 23:11:45 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:23.282 23:11:45 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:23.282 23:11:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2758064' 00:15:23.282 killing process with pid 2758064 00:15:23.282 23:11:45 -- common/autotest_common.sh@945 -- # kill 2758064 00:15:23.282 23:11:45 -- common/autotest_common.sh@950 -- # wait 2758064 00:15:23.282 23:11:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:23.282 23:11:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:23.282 23:11:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:23.282 23:11:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:23.282 23:11:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:23.282 23:11:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.282 23:11:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:23.282 23:11:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.824 23:11:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:25.824 00:15:25.824 real 0m20.731s 00:15:25.824 user 0m42.005s 00:15:25.824 sys 0m8.605s 00:15:25.824 23:11:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:25.824 23:11:47 -- common/autotest_common.sh@10 -- # set +x 00:15:25.824 ************************************ 00:15:25.824 END TEST nvmf_connect_stress 00:15:25.824 
************************************ 00:15:25.824 23:11:47 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:25.824 23:11:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:25.824 23:11:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:25.824 23:11:47 -- common/autotest_common.sh@10 -- # set +x 00:15:25.824 ************************************ 00:15:25.824 START TEST nvmf_fused_ordering 00:15:25.824 ************************************ 00:15:25.824 23:11:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:25.824 * Looking for test storage... 00:15:25.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:25.824 23:11:48 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:25.824 23:11:48 -- nvmf/common.sh@7 -- # uname -s 00:15:25.824 23:11:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:25.824 23:11:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:25.824 23:11:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:25.824 23:11:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:25.824 23:11:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:25.824 23:11:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:25.824 23:11:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:25.824 23:11:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:25.824 23:11:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:25.824 23:11:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:25.824 23:11:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:25.824 23:11:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:25.824 23:11:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:25.824 23:11:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:25.824 23:11:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:25.824 23:11:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:25.824 23:11:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:25.824 23:11:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:25.824 23:11:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:25.825 23:11:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.825 23:11:48 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.825 23:11:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.825 23:11:48 -- paths/export.sh@5 -- # export PATH 00:15:25.825 23:11:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.825 23:11:48 -- nvmf/common.sh@46 -- # : 0 00:15:25.825 23:11:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:25.825 23:11:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:25.825 23:11:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:25.825 23:11:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:25.825 23:11:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:25.825 23:11:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:25.825 23:11:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:25.825 23:11:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:25.825 23:11:48 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:25.825 23:11:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:25.825 23:11:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:25.825 23:11:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:25.825 23:11:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:25.825 23:11:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:25.825 23:11:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.825 23:11:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.825 23:11:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.825 23:11:48 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:25.825 23:11:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:25.825 23:11:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:25.825 23:11:48 -- common/autotest_common.sh@10 -- # set +x 00:15:32.413 23:11:55 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:32.413 23:11:55 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:32.413 23:11:55 -- nvmf/common.sh@290 -- # local -a pci_devs 
00:15:32.413 23:11:55 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:32.413 23:11:55 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:32.413 23:11:55 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:32.413 23:11:55 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:32.413 23:11:55 -- nvmf/common.sh@294 -- # net_devs=() 00:15:32.413 23:11:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:32.413 23:11:55 -- nvmf/common.sh@295 -- # e810=() 00:15:32.413 23:11:55 -- nvmf/common.sh@295 -- # local -ga e810 00:15:32.413 23:11:55 -- nvmf/common.sh@296 -- # x722=() 00:15:32.413 23:11:55 -- nvmf/common.sh@296 -- # local -ga x722 00:15:32.413 23:11:55 -- nvmf/common.sh@297 -- # mlx=() 00:15:32.413 23:11:55 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:32.413 23:11:55 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:32.413 23:11:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:32.413 23:11:55 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:32.413 23:11:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:32.413 23:11:55 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:32.413 23:11:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:32.413 23:11:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:32.413 23:11:55 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:32.413 23:11:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:32.413 23:11:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:32.413 23:11:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:32.413 23:11:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:32.413 23:11:55 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:32.413 23:11:55 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:32.413 23:11:55 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:32.413 23:11:55 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:32.413 23:11:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:32.413 23:11:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:32.413 23:11:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:32.413 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:32.413 23:11:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:32.413 23:11:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:32.413 23:11:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:32.413 23:11:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:32.414 23:11:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:32.414 23:11:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:32.414 23:11:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:32.414 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:32.414 23:11:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:32.414 23:11:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:32.414 23:11:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:32.414 23:11:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:32.414 23:11:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:32.414 23:11:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:32.414 23:11:55 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:32.414 23:11:55 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 
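The nvmf/common.sh trace above is the NIC-discovery step: it builds allow-lists of Intel E810/X722 and Mellanox device IDs (vendors 0x8086 and 0x15b3), matches the two ice-driven 0x159b ports at 0000:31:00.0 and 0000:31:00.1, and then maps each PCI address to its net device. A minimal stand-alone sketch of the detection half follows; the function name and the /sys/bus/pci scan are illustrative assumptions, not the suite's implementation (which works from the pre-built pci_bus_cache seen in the trace).

  # Sketch of the detection step traced above; assumptions noted in the lead-in.
  find_supported_nvmf_nics() {
      local intel=0x8086 mellanox=0x15b3 pci vendor device
      local -a e810=(0x1592 0x159b) x722=(0x37d2)
      local -a mlx=(0xa2dc 0x1021 0xa2d6 0x101d 0x1017 0x1019 0x1015 0x1013)
      for pci in /sys/bus/pci/devices/*; do
          vendor=$(<"$pci/vendor") device=$(<"$pci/device")
          case "$vendor" in
              "$intel")    [[ " ${e810[*]} ${x722[*]} " == *" $device "* ]] || continue ;;
              "$mellanox") [[ " ${mlx[*]} " == *" $device "* ]] || continue ;;
              *)           continue ;;
          esac
          # Matches the "Found 0000:31:00.0 (0x8086 - 0x159b)" lines in the log.
          echo "Found ${pci##*/} ($vendor - $device)"
      done
  }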
00:15:32.414 23:11:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:32.414 23:11:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:32.414 23:11:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:32.414 23:11:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:32.414 23:11:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:32.414 Found net devices under 0000:31:00.0: cvl_0_0 00:15:32.414 23:11:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:32.414 23:11:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:32.414 23:11:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:32.414 23:11:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:32.414 23:11:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:32.414 23:11:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:32.414 Found net devices under 0000:31:00.1: cvl_0_1 00:15:32.414 23:11:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:32.414 23:11:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:32.414 23:11:55 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:32.414 23:11:55 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:32.414 23:11:55 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:32.414 23:11:55 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:32.414 23:11:55 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:32.414 23:11:55 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:32.414 23:11:55 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:32.414 23:11:55 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:32.414 23:11:55 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:32.414 23:11:55 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:32.414 23:11:55 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:32.414 23:11:55 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:32.414 23:11:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:32.414 23:11:55 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:32.414 23:11:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:32.414 23:11:55 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:32.414 23:11:55 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:32.674 23:11:55 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:32.674 23:11:55 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:32.674 23:11:55 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:32.674 23:11:55 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:32.674 23:11:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:32.674 23:11:55 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:32.674 23:11:55 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:32.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:32.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:15:32.674 00:15:32.674 --- 10.0.0.2 ping statistics --- 00:15:32.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.674 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:15:32.674 23:11:55 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:32.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:32.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:15:32.935 00:15:32.935 --- 10.0.0.1 ping statistics --- 00:15:32.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.935 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:15:32.935 23:11:55 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:32.935 23:11:55 -- nvmf/common.sh@410 -- # return 0 00:15:32.935 23:11:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:32.935 23:11:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:32.935 23:11:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:32.935 23:11:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:32.935 23:11:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:32.935 23:11:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:32.935 23:11:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:32.935 23:11:55 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:32.935 23:11:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:32.935 23:11:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:32.935 23:11:55 -- common/autotest_common.sh@10 -- # set +x 00:15:32.935 23:11:55 -- nvmf/common.sh@469 -- # nvmfpid=2764550 00:15:32.935 23:11:55 -- nvmf/common.sh@470 -- # waitforlisten 2764550 00:15:32.935 23:11:55 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:32.935 23:11:55 -- common/autotest_common.sh@819 -- # '[' -z 2764550 ']' 00:15:32.935 23:11:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.935 23:11:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:32.935 23:11:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.935 23:11:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:32.935 23:11:55 -- common/autotest_common.sh@10 -- # set +x 00:15:32.935 [2024-06-07 23:11:55.446251] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:32.935 [2024-06-07 23:11:55.446301] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.935 EAL: No free 2048 kB hugepages reported on node 1 00:15:32.935 [2024-06-07 23:11:55.531758] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.935 [2024-06-07 23:11:55.563959] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:32.935 [2024-06-07 23:11:55.564092] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:32.935 [2024-06-07 23:11:55.564102] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:32.935 [2024-06-07 23:11:55.564109] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:32.935 [2024-06-07 23:11:55.564130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.877 23:11:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:33.877 23:11:56 -- common/autotest_common.sh@852 -- # return 0 00:15:33.877 23:11:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:33.877 23:11:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:33.877 23:11:56 -- common/autotest_common.sh@10 -- # set +x 00:15:33.877 23:11:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.877 23:11:56 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:33.877 23:11:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:33.877 23:11:56 -- common/autotest_common.sh@10 -- # set +x 00:15:33.877 [2024-06-07 23:11:56.259571] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:33.877 23:11:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:33.877 23:11:56 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:33.877 23:11:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:33.877 23:11:56 -- common/autotest_common.sh@10 -- # set +x 00:15:33.877 23:11:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:33.877 23:11:56 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:33.877 23:11:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:33.877 23:11:56 -- common/autotest_common.sh@10 -- # set +x 00:15:33.877 [2024-06-07 23:11:56.275814] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:33.877 23:11:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:33.877 23:11:56 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:33.877 23:11:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:33.877 23:11:56 -- common/autotest_common.sh@10 -- # set +x 00:15:33.877 NULL1 00:15:33.877 23:11:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:33.877 23:11:56 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:33.877 23:11:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:33.877 23:11:56 -- common/autotest_common.sh@10 -- # set +x 00:15:33.877 23:11:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:33.877 23:11:56 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:33.877 23:11:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:33.877 23:11:56 -- common/autotest_common.sh@10 -- # set +x 00:15:33.877 23:11:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:33.877 23:11:56 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:33.877 [2024-06-07 23:11:56.330269] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:15:33.877 [2024-06-07 23:11:56.330312] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2764756 ] 00:15:33.877 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.450 Attached to nqn.2016-06.io.spdk:cnode1 00:15:34.450 Namespace ID: 1 size: 1GB 00:15:34.450 fused_ordering(0) 00:15:34.450 fused_ordering(1) 00:15:34.450 fused_ordering(2) 00:15:34.450 fused_ordering(3) 00:15:34.450 fused_ordering(4) 00:15:34.450 fused_ordering(5) 00:15:34.450 fused_ordering(6) 00:15:34.450 fused_ordering(7) 00:15:34.450 fused_ordering(8) 00:15:34.450 fused_ordering(9) 00:15:34.450 fused_ordering(10) 00:15:34.450 fused_ordering(11) 00:15:34.450 fused_ordering(12) 00:15:34.450 fused_ordering(13) 00:15:34.450 fused_ordering(14) 00:15:34.450 fused_ordering(15) 00:15:34.450 fused_ordering(16) 00:15:34.450 fused_ordering(17) 00:15:34.450 fused_ordering(18) 00:15:34.450 fused_ordering(19) 00:15:34.450 fused_ordering(20) 00:15:34.450 fused_ordering(21) 00:15:34.450 fused_ordering(22) 00:15:34.450 fused_ordering(23) 00:15:34.450 fused_ordering(24) 00:15:34.450 fused_ordering(25) 00:15:34.450 fused_ordering(26) 00:15:34.450 fused_ordering(27) 00:15:34.450 fused_ordering(28) 00:15:34.450 fused_ordering(29) 00:15:34.450 fused_ordering(30) 00:15:34.450 fused_ordering(31) 00:15:34.450 fused_ordering(32) 00:15:34.450 fused_ordering(33) 00:15:34.450 fused_ordering(34) 00:15:34.450 fused_ordering(35) 00:15:34.450 fused_ordering(36) 00:15:34.450 fused_ordering(37) 00:15:34.450 fused_ordering(38) 00:15:34.450 fused_ordering(39) 00:15:34.450 fused_ordering(40) 00:15:34.450 fused_ordering(41) 00:15:34.450 fused_ordering(42) 00:15:34.450 fused_ordering(43) 00:15:34.450 fused_ordering(44) 00:15:34.450 fused_ordering(45) 00:15:34.450 fused_ordering(46) 00:15:34.450 fused_ordering(47) 00:15:34.450 fused_ordering(48) 00:15:34.450 fused_ordering(49) 00:15:34.450 fused_ordering(50) 00:15:34.450 fused_ordering(51) 00:15:34.450 fused_ordering(52) 00:15:34.450 fused_ordering(53) 00:15:34.450 fused_ordering(54) 00:15:34.450 fused_ordering(55) 00:15:34.450 fused_ordering(56) 00:15:34.450 fused_ordering(57) 00:15:34.450 fused_ordering(58) 00:15:34.450 fused_ordering(59) 00:15:34.450 fused_ordering(60) 00:15:34.450 fused_ordering(61) 00:15:34.450 fused_ordering(62) 00:15:34.450 fused_ordering(63) 00:15:34.450 fused_ordering(64) 00:15:34.450 fused_ordering(65) 00:15:34.450 fused_ordering(66) 00:15:34.450 fused_ordering(67) 00:15:34.450 fused_ordering(68) 00:15:34.450 fused_ordering(69) 00:15:34.450 fused_ordering(70) 00:15:34.450 fused_ordering(71) 00:15:34.450 fused_ordering(72) 00:15:34.450 fused_ordering(73) 00:15:34.450 fused_ordering(74) 00:15:34.450 fused_ordering(75) 00:15:34.450 fused_ordering(76) 00:15:34.450 fused_ordering(77) 00:15:34.450 fused_ordering(78) 00:15:34.450 fused_ordering(79) 00:15:34.450 fused_ordering(80) 00:15:34.450 fused_ordering(81) 00:15:34.450 fused_ordering(82) 00:15:34.450 fused_ordering(83) 00:15:34.450 fused_ordering(84) 00:15:34.450 fused_ordering(85) 00:15:34.450 fused_ordering(86) 00:15:34.450 fused_ordering(87) 00:15:34.450 fused_ordering(88) 00:15:34.450 fused_ordering(89) 00:15:34.450 fused_ordering(90) 00:15:34.450 fused_ordering(91) 00:15:34.450 fused_ordering(92) 00:15:34.450 fused_ordering(93) 00:15:34.450 fused_ordering(94) 00:15:34.450 fused_ordering(95) 00:15:34.450 fused_ordering(96) 00:15:34.450 
fused_ordering(97) 00:15:34.450 fused_ordering(98) 00:15:34.450 fused_ordering(99) 00:15:34.450 fused_ordering(100) 00:15:34.451 fused_ordering(101) 00:15:34.451 fused_ordering(102) 00:15:34.451 fused_ordering(103) 00:15:34.451 fused_ordering(104) 00:15:34.451 fused_ordering(105) 00:15:34.451 fused_ordering(106) 00:15:34.451 fused_ordering(107) 00:15:34.451 fused_ordering(108) 00:15:34.451 fused_ordering(109) 00:15:34.451 fused_ordering(110) 00:15:34.451 fused_ordering(111) 00:15:34.451 fused_ordering(112) 00:15:34.451 fused_ordering(113) 00:15:34.451 fused_ordering(114) 00:15:34.451 fused_ordering(115) 00:15:34.451 fused_ordering(116) 00:15:34.451 fused_ordering(117) 00:15:34.451 fused_ordering(118) 00:15:34.451 fused_ordering(119) 00:15:34.451 fused_ordering(120) 00:15:34.451 fused_ordering(121) 00:15:34.451 fused_ordering(122) 00:15:34.451 fused_ordering(123) 00:15:34.451 fused_ordering(124) 00:15:34.451 fused_ordering(125) 00:15:34.451 fused_ordering(126) 00:15:34.451 fused_ordering(127) 00:15:34.451 fused_ordering(128) 00:15:34.451 fused_ordering(129) 00:15:34.451 fused_ordering(130) 00:15:34.451 fused_ordering(131) 00:15:34.451 fused_ordering(132) 00:15:34.451 fused_ordering(133) 00:15:34.451 fused_ordering(134) 00:15:34.451 fused_ordering(135) 00:15:34.451 fused_ordering(136) 00:15:34.451 fused_ordering(137) 00:15:34.451 fused_ordering(138) 00:15:34.451 fused_ordering(139) 00:15:34.451 fused_ordering(140) 00:15:34.451 fused_ordering(141) 00:15:34.451 fused_ordering(142) 00:15:34.451 fused_ordering(143) 00:15:34.451 fused_ordering(144) 00:15:34.451 fused_ordering(145) 00:15:34.451 fused_ordering(146) 00:15:34.451 fused_ordering(147) 00:15:34.451 fused_ordering(148) 00:15:34.451 fused_ordering(149) 00:15:34.451 fused_ordering(150) 00:15:34.451 fused_ordering(151) 00:15:34.451 fused_ordering(152) 00:15:34.451 fused_ordering(153) 00:15:34.451 fused_ordering(154) 00:15:34.451 fused_ordering(155) 00:15:34.451 fused_ordering(156) 00:15:34.451 fused_ordering(157) 00:15:34.451 fused_ordering(158) 00:15:34.451 fused_ordering(159) 00:15:34.451 fused_ordering(160) 00:15:34.451 fused_ordering(161) 00:15:34.451 fused_ordering(162) 00:15:34.451 fused_ordering(163) 00:15:34.451 fused_ordering(164) 00:15:34.451 fused_ordering(165) 00:15:34.451 fused_ordering(166) 00:15:34.451 fused_ordering(167) 00:15:34.451 fused_ordering(168) 00:15:34.451 fused_ordering(169) 00:15:34.451 fused_ordering(170) 00:15:34.451 fused_ordering(171) 00:15:34.451 fused_ordering(172) 00:15:34.451 fused_ordering(173) 00:15:34.451 fused_ordering(174) 00:15:34.451 fused_ordering(175) 00:15:34.451 fused_ordering(176) 00:15:34.451 fused_ordering(177) 00:15:34.451 fused_ordering(178) 00:15:34.451 fused_ordering(179) 00:15:34.451 fused_ordering(180) 00:15:34.451 fused_ordering(181) 00:15:34.451 fused_ordering(182) 00:15:34.451 fused_ordering(183) 00:15:34.451 fused_ordering(184) 00:15:34.451 fused_ordering(185) 00:15:34.451 fused_ordering(186) 00:15:34.451 fused_ordering(187) 00:15:34.451 fused_ordering(188) 00:15:34.451 fused_ordering(189) 00:15:34.451 fused_ordering(190) 00:15:34.451 fused_ordering(191) 00:15:34.451 fused_ordering(192) 00:15:34.451 fused_ordering(193) 00:15:34.451 fused_ordering(194) 00:15:34.451 fused_ordering(195) 00:15:34.451 fused_ordering(196) 00:15:34.451 fused_ordering(197) 00:15:34.451 fused_ordering(198) 00:15:34.451 fused_ordering(199) 00:15:34.451 fused_ordering(200) 00:15:34.451 fused_ordering(201) 00:15:34.451 fused_ordering(202) 00:15:34.451 fused_ordering(203) 00:15:34.451 fused_ordering(204) 
00:15:34.451 fused_ordering(205) 00:15:34.712 fused_ordering(206) 00:15:34.712 fused_ordering(207) 00:15:34.712 fused_ordering(208) 00:15:34.712 fused_ordering(209) 00:15:34.712 fused_ordering(210) 00:15:34.712 fused_ordering(211) 00:15:34.712 fused_ordering(212) 00:15:34.712 fused_ordering(213) 00:15:34.712 fused_ordering(214) 00:15:34.712 fused_ordering(215) 00:15:34.712 fused_ordering(216) 00:15:34.712 fused_ordering(217) 00:15:34.712 fused_ordering(218) 00:15:34.712 fused_ordering(219) 00:15:34.712 fused_ordering(220) 00:15:34.712 fused_ordering(221) 00:15:34.712 fused_ordering(222) 00:15:34.712 fused_ordering(223) 00:15:34.712 fused_ordering(224) 00:15:34.712 fused_ordering(225) 00:15:34.712 fused_ordering(226) 00:15:34.712 fused_ordering(227) 00:15:34.712 fused_ordering(228) 00:15:34.712 fused_ordering(229) 00:15:34.712 fused_ordering(230) 00:15:34.712 fused_ordering(231) 00:15:34.712 fused_ordering(232) 00:15:34.712 fused_ordering(233) 00:15:34.712 fused_ordering(234) 00:15:34.712 fused_ordering(235) 00:15:34.712 fused_ordering(236) 00:15:34.712 fused_ordering(237) 00:15:34.712 fused_ordering(238) 00:15:34.712 fused_ordering(239) 00:15:34.712 fused_ordering(240) 00:15:34.712 fused_ordering(241) 00:15:34.712 fused_ordering(242) 00:15:34.712 fused_ordering(243) 00:15:34.712 fused_ordering(244) 00:15:34.712 fused_ordering(245) 00:15:34.712 fused_ordering(246) 00:15:34.712 fused_ordering(247) 00:15:34.712 fused_ordering(248) 00:15:34.712 fused_ordering(249) 00:15:34.712 fused_ordering(250) 00:15:34.712 fused_ordering(251) 00:15:34.712 fused_ordering(252) 00:15:34.712 fused_ordering(253) 00:15:34.712 fused_ordering(254) 00:15:34.712 fused_ordering(255) 00:15:34.712 fused_ordering(256) 00:15:34.712 fused_ordering(257) 00:15:34.712 fused_ordering(258) 00:15:34.712 fused_ordering(259) 00:15:34.712 fused_ordering(260) 00:15:34.712 fused_ordering(261) 00:15:34.712 fused_ordering(262) 00:15:34.712 fused_ordering(263) 00:15:34.712 fused_ordering(264) 00:15:34.712 fused_ordering(265) 00:15:34.712 fused_ordering(266) 00:15:34.712 fused_ordering(267) 00:15:34.712 fused_ordering(268) 00:15:34.712 fused_ordering(269) 00:15:34.712 fused_ordering(270) 00:15:34.712 fused_ordering(271) 00:15:34.712 fused_ordering(272) 00:15:34.712 fused_ordering(273) 00:15:34.712 fused_ordering(274) 00:15:34.712 fused_ordering(275) 00:15:34.712 fused_ordering(276) 00:15:34.712 fused_ordering(277) 00:15:34.712 fused_ordering(278) 00:15:34.712 fused_ordering(279) 00:15:34.712 fused_ordering(280) 00:15:34.712 fused_ordering(281) 00:15:34.712 fused_ordering(282) 00:15:34.712 fused_ordering(283) 00:15:34.712 fused_ordering(284) 00:15:34.712 fused_ordering(285) 00:15:34.712 fused_ordering(286) 00:15:34.712 fused_ordering(287) 00:15:34.712 fused_ordering(288) 00:15:34.712 fused_ordering(289) 00:15:34.712 fused_ordering(290) 00:15:34.712 fused_ordering(291) 00:15:34.712 fused_ordering(292) 00:15:34.712 fused_ordering(293) 00:15:34.712 fused_ordering(294) 00:15:34.712 fused_ordering(295) 00:15:34.712 fused_ordering(296) 00:15:34.712 fused_ordering(297) 00:15:34.712 fused_ordering(298) 00:15:34.712 fused_ordering(299) 00:15:34.712 fused_ordering(300) 00:15:34.712 fused_ordering(301) 00:15:34.712 fused_ordering(302) 00:15:34.712 fused_ordering(303) 00:15:34.712 fused_ordering(304) 00:15:34.712 fused_ordering(305) 00:15:34.712 fused_ordering(306) 00:15:34.712 fused_ordering(307) 00:15:34.712 fused_ordering(308) 00:15:34.712 fused_ordering(309) 00:15:34.712 fused_ordering(310) 00:15:34.712 fused_ordering(311) 00:15:34.712 
fused_ordering(312) through fused_ordering(956) completed in order, timestamps advancing from 00:15:34.712 to 00:15:36.491
fused_ordering(957) 00:15:36.491 fused_ordering(958) 00:15:36.491 fused_ordering(959) 00:15:36.491 fused_ordering(960) 00:15:36.491 fused_ordering(961) 00:15:36.491 fused_ordering(962) 00:15:36.491 fused_ordering(963) 00:15:36.491 fused_ordering(964) 00:15:36.491 fused_ordering(965) 00:15:36.491 fused_ordering(966) 00:15:36.491 fused_ordering(967) 00:15:36.491 fused_ordering(968) 00:15:36.491 fused_ordering(969) 00:15:36.491 fused_ordering(970) 00:15:36.491 fused_ordering(971) 00:15:36.491 fused_ordering(972) 00:15:36.491 fused_ordering(973) 00:15:36.491 fused_ordering(974) 00:15:36.491 fused_ordering(975) 00:15:36.491 fused_ordering(976) 00:15:36.491 fused_ordering(977) 00:15:36.491 fused_ordering(978) 00:15:36.491 fused_ordering(979) 00:15:36.491 fused_ordering(980) 00:15:36.491 fused_ordering(981) 00:15:36.491 fused_ordering(982) 00:15:36.491 fused_ordering(983) 00:15:36.491 fused_ordering(984) 00:15:36.491 fused_ordering(985) 00:15:36.491 fused_ordering(986) 00:15:36.491 fused_ordering(987) 00:15:36.491 fused_ordering(988) 00:15:36.491 fused_ordering(989) 00:15:36.491 fused_ordering(990) 00:15:36.491 fused_ordering(991) 00:15:36.491 fused_ordering(992) 00:15:36.491 fused_ordering(993) 00:15:36.491 fused_ordering(994) 00:15:36.491 fused_ordering(995) 00:15:36.491 fused_ordering(996) 00:15:36.491 fused_ordering(997) 00:15:36.491 fused_ordering(998) 00:15:36.491 fused_ordering(999) 00:15:36.491 fused_ordering(1000) 00:15:36.491 fused_ordering(1001) 00:15:36.491 fused_ordering(1002) 00:15:36.491 fused_ordering(1003) 00:15:36.491 fused_ordering(1004) 00:15:36.491 fused_ordering(1005) 00:15:36.491 fused_ordering(1006) 00:15:36.491 fused_ordering(1007) 00:15:36.491 fused_ordering(1008) 00:15:36.491 fused_ordering(1009) 00:15:36.491 fused_ordering(1010) 00:15:36.491 fused_ordering(1011) 00:15:36.491 fused_ordering(1012) 00:15:36.491 fused_ordering(1013) 00:15:36.491 fused_ordering(1014) 00:15:36.491 fused_ordering(1015) 00:15:36.491 fused_ordering(1016) 00:15:36.491 fused_ordering(1017) 00:15:36.491 fused_ordering(1018) 00:15:36.491 fused_ordering(1019) 00:15:36.491 fused_ordering(1020) 00:15:36.491 fused_ordering(1021) 00:15:36.491 fused_ordering(1022) 00:15:36.491 fused_ordering(1023) 00:15:36.491 23:11:58 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:36.491 23:11:58 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:36.491 23:11:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:36.491 23:11:58 -- nvmf/common.sh@116 -- # sync 00:15:36.491 23:11:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:36.491 23:11:58 -- nvmf/common.sh@119 -- # set +e 00:15:36.491 23:11:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:36.491 23:11:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:36.491 rmmod nvme_tcp 00:15:36.491 rmmod nvme_fabrics 00:15:36.491 rmmod nvme_keyring 00:15:36.491 23:11:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:36.491 23:11:58 -- nvmf/common.sh@123 -- # set -e 00:15:36.491 23:11:58 -- nvmf/common.sh@124 -- # return 0 00:15:36.491 23:11:58 -- nvmf/common.sh@477 -- # '[' -n 2764550 ']' 00:15:36.491 23:11:58 -- nvmf/common.sh@478 -- # killprocess 2764550 00:15:36.491 23:11:58 -- common/autotest_common.sh@926 -- # '[' -z 2764550 ']' 00:15:36.491 23:11:58 -- common/autotest_common.sh@930 -- # kill -0 2764550 00:15:36.491 23:11:58 -- common/autotest_common.sh@931 -- # uname 00:15:36.491 23:11:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:36.491 23:11:58 -- common/autotest_common.sh@932 -- # ps --no-headers 
-o comm= 2764550 00:15:36.491 23:11:58 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:36.491 23:11:58 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:36.491 23:11:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2764550' 00:15:36.491 killing process with pid 2764550 00:15:36.491 23:11:58 -- common/autotest_common.sh@945 -- # kill 2764550 00:15:36.491 23:11:58 -- common/autotest_common.sh@950 -- # wait 2764550 00:15:36.491 23:11:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:36.491 23:11:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:36.491 23:11:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:36.492 23:11:59 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:36.492 23:11:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:36.492 23:11:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.492 23:11:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:36.492 23:11:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.037 23:12:01 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:39.037 00:15:39.037 real 0m13.221s 00:15:39.037 user 0m7.116s 00:15:39.037 sys 0m6.964s 00:15:39.037 23:12:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:39.037 23:12:01 -- common/autotest_common.sh@10 -- # set +x 00:15:39.037 ************************************ 00:15:39.037 END TEST nvmf_fused_ordering 00:15:39.037 ************************************ 00:15:39.037 23:12:01 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:39.037 23:12:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:39.037 23:12:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:39.037 23:12:01 -- common/autotest_common.sh@10 -- # set +x 00:15:39.037 ************************************ 00:15:39.037 START TEST nvmf_delete_subsystem 00:15:39.037 ************************************ 00:15:39.037 23:12:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:39.037 * Looking for test storage... 
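The run_test wrapper above hands control to the delete_subsystem target script with the transport passed on the command line; the same test can also be exercised outside the harness. A minimal sketch, assuming an already-built SPDK checkout and root privileges (the path and flag are taken from the invocation above):

  # run the delete_subsystem target test directly against the TCP transport
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo ./test/nvmf/target/delete_subsystem.sh --transport=tcp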
00:15:39.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:39.037 23:12:01 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:39.037 23:12:01 -- nvmf/common.sh@7 -- # uname -s 00:15:39.037 23:12:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:39.037 23:12:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:39.037 23:12:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.037 23:12:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.037 23:12:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:39.037 23:12:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:39.037 23:12:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.037 23:12:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:39.037 23:12:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.037 23:12:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:39.037 23:12:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:39.037 23:12:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:39.037 23:12:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.037 23:12:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:39.037 23:12:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:39.037 23:12:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:39.037 23:12:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.037 23:12:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.037 23:12:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.037 23:12:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.037 23:12:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.037 23:12:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.037 23:12:01 -- paths/export.sh@5 -- # export PATH 00:15:39.037 23:12:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.037 23:12:01 -- nvmf/common.sh@46 -- # : 0 00:15:39.037 23:12:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:39.037 23:12:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:39.037 23:12:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:39.037 23:12:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.037 23:12:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.037 23:12:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:39.037 23:12:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:39.037 23:12:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:39.037 23:12:01 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:15:39.037 23:12:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:39.037 23:12:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:39.037 23:12:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:39.037 23:12:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:39.037 23:12:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:39.037 23:12:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.037 23:12:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.037 23:12:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.037 23:12:01 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:39.037 23:12:01 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:39.038 23:12:01 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:39.038 23:12:01 -- common/autotest_common.sh@10 -- # set +x 00:15:47.183 23:12:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:47.183 23:12:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:47.183 23:12:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:47.183 23:12:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:47.183 23:12:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:47.183 23:12:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:47.183 23:12:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:47.183 23:12:08 -- nvmf/common.sh@294 -- # net_devs=() 00:15:47.183 23:12:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:47.183 23:12:08 -- nvmf/common.sh@295 -- # e810=() 00:15:47.183 23:12:08 -- nvmf/common.sh@295 -- # local -ga e810 00:15:47.183 23:12:08 -- nvmf/common.sh@296 -- # x722=() 
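Before any NICs are probed, nvmf/common.sh pins down the connection parameters seen in the trace above. A short sketch of the equivalent manual setup, assuming nvme-cli is installed (the NQN/UUID values are machine-specific, and the expansion used for NVME_HOSTID is an illustration rather than the script's exact code):

  # connection defaults used by the TCP target tests (values from the trace above)
  NVMF_PORT=4420
  NVMF_SECOND_PORT=4421
  NVMF_THIRD_PORT=4422
  NVMF_SERIAL=SPDKISFASTANDAWESOME
  # generate a host NQN for the initiator; the trailing UUID doubles as the host ID
  NVME_HOSTNQN=$(nvme gen-hostnqn)          # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}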
00:15:47.183 23:12:08 -- nvmf/common.sh@296 -- # local -ga x722 00:15:47.183 23:12:08 -- nvmf/common.sh@297 -- # mlx=() 00:15:47.183 23:12:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:47.183 23:12:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:47.183 23:12:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:47.183 23:12:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:47.183 23:12:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:47.183 23:12:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:47.183 23:12:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:47.184 23:12:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:47.184 23:12:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:47.184 23:12:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:47.184 23:12:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:47.184 23:12:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:47.184 23:12:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:47.184 23:12:08 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:47.184 23:12:08 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:47.184 23:12:08 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:47.184 23:12:08 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:47.184 23:12:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:47.184 23:12:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:47.184 23:12:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:47.184 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:47.184 23:12:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:47.184 23:12:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:47.184 23:12:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:47.184 23:12:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:47.184 23:12:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:47.184 23:12:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:47.184 23:12:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:47.184 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:47.184 23:12:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:47.184 23:12:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:47.184 23:12:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:47.184 23:12:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:47.184 23:12:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:47.184 23:12:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:47.184 23:12:08 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:47.184 23:12:08 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:47.184 23:12:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:47.184 23:12:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:47.184 23:12:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:47.184 23:12:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:47.184 23:12:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:47.184 Found net devices under 0000:31:00.0: cvl_0_0 00:15:47.184 23:12:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:15:47.184 23:12:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:47.184 23:12:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:47.184 23:12:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:47.184 23:12:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:47.184 23:12:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:47.184 Found net devices under 0000:31:00.1: cvl_0_1 00:15:47.184 23:12:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:47.184 23:12:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:47.184 23:12:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:47.184 23:12:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:47.184 23:12:08 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:47.184 23:12:08 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:47.184 23:12:08 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:47.184 23:12:08 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:47.184 23:12:08 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:47.184 23:12:08 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:47.184 23:12:08 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:47.184 23:12:08 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:47.184 23:12:08 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:47.184 23:12:08 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:47.184 23:12:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:47.184 23:12:08 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:47.184 23:12:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:47.184 23:12:08 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:47.184 23:12:08 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:47.184 23:12:08 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:47.184 23:12:08 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:47.184 23:12:08 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:47.184 23:12:08 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:47.184 23:12:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:47.184 23:12:08 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:47.184 23:12:08 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:47.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:47.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.837 ms 00:15:47.184 00:15:47.184 --- 10.0.0.2 ping statistics --- 00:15:47.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.184 rtt min/avg/max/mdev = 0.837/0.837/0.837/0.000 ms 00:15:47.184 23:12:08 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:47.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:47.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.367 ms 00:15:47.184 00:15:47.184 --- 10.0.0.1 ping statistics --- 00:15:47.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.184 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:15:47.184 23:12:08 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:47.184 23:12:08 -- nvmf/common.sh@410 -- # return 0 00:15:47.184 23:12:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:47.184 23:12:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:47.184 23:12:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:47.184 23:12:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:47.184 23:12:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:47.184 23:12:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:47.184 23:12:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:47.184 23:12:08 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:15:47.184 23:12:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:47.184 23:12:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:47.184 23:12:08 -- common/autotest_common.sh@10 -- # set +x 00:15:47.184 23:12:08 -- nvmf/common.sh@469 -- # nvmfpid=2769818 00:15:47.184 23:12:08 -- nvmf/common.sh@470 -- # waitforlisten 2769818 00:15:47.184 23:12:08 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:47.184 23:12:08 -- common/autotest_common.sh@819 -- # '[' -z 2769818 ']' 00:15:47.184 23:12:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.184 23:12:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:47.184 23:12:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.184 23:12:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:47.184 23:12:08 -- common/autotest_common.sh@10 -- # set +x 00:15:47.184 [2024-06-07 23:12:08.776920] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:47.184 [2024-06-07 23:12:08.776983] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:47.184 EAL: No free 2048 kB hugepages reported on node 1 00:15:47.184 [2024-06-07 23:12:08.850783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:47.184 [2024-06-07 23:12:08.888727] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:47.184 [2024-06-07 23:12:08.888863] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:47.184 [2024-06-07 23:12:08.888872] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:47.184 [2024-06-07 23:12:08.888880] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
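With the target process up and listening on the default RPC socket, the rpc_cmd calls traced below configure it over JSON-RPC. A hedged sketch of the same sequence issued straight through scripts/rpc.py (arguments copied from the trace; relative paths and sudo usage assume the SPDK checkout as the working directory on a root-capable host):

  # start the target inside the test namespace, then build the subsystem it will export
  sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  # wait for /var/tmp/spdk.sock to appear before issuing RPCs (the harness uses waitforlisten)
  sudo ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  sudo ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  sudo ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sudo ./scripts/rpc.py bdev_null_create NULL1 1000 512              # 1000 MB null bdev, 512-byte blocks
  sudo ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  sudo ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The delay bdev layered on the null bdev is what keeps I/O in flight long enough for the later nvmf_delete_subsystem call to race against it, which is the behavior this test exercises.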
00:15:47.184 [2024-06-07 23:12:08.889023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.184 [2024-06-07 23:12:08.889025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.184 23:12:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:47.184 23:12:09 -- common/autotest_common.sh@852 -- # return 0 00:15:47.184 23:12:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:47.184 23:12:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:47.184 23:12:09 -- common/autotest_common.sh@10 -- # set +x 00:15:47.184 23:12:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:47.184 23:12:09 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:47.184 23:12:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:47.184 23:12:09 -- common/autotest_common.sh@10 -- # set +x 00:15:47.184 [2024-06-07 23:12:09.584828] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:47.184 23:12:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:47.184 23:12:09 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:47.184 23:12:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:47.184 23:12:09 -- common/autotest_common.sh@10 -- # set +x 00:15:47.184 23:12:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:47.184 23:12:09 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:47.184 23:12:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:47.184 23:12:09 -- common/autotest_common.sh@10 -- # set +x 00:15:47.184 [2024-06-07 23:12:09.609021] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:47.184 23:12:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:47.184 23:12:09 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:47.184 23:12:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:47.184 23:12:09 -- common/autotest_common.sh@10 -- # set +x 00:15:47.184 NULL1 00:15:47.184 23:12:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:47.184 23:12:09 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:47.184 23:12:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:47.184 23:12:09 -- common/autotest_common.sh@10 -- # set +x 00:15:47.184 Delay0 00:15:47.184 23:12:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:47.184 23:12:09 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:47.184 23:12:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:47.184 23:12:09 -- common/autotest_common.sh@10 -- # set +x 00:15:47.184 23:12:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:47.184 23:12:09 -- target/delete_subsystem.sh@28 -- # perf_pid=2770115 00:15:47.184 23:12:09 -- target/delete_subsystem.sh@30 -- # sleep 2 00:15:47.184 23:12:09 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:47.184 EAL: No free 2048 kB hugepages reported on node 1 00:15:47.184 [2024-06-07 23:12:09.685639] 
subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:49.118 23:12:11 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:49.118 23:12:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:49.118 23:12:11 -- common/autotest_common.sh@10 -- # set +x 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 starting I/O failed: -6 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 Write completed with error (sct=0, sc=8) 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 starting I/O failed: -6 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 Write completed with error (sct=0, sc=8) 00:15:49.380 starting I/O failed: -6 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 Write completed with error (sct=0, sc=8) 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 starting I/O failed: -6 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 Write completed with error (sct=0, sc=8) 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 starting I/O failed: -6 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 Write completed with error (sct=0, sc=8) 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 starting I/O failed: -6 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 Write completed with error (sct=0, sc=8) 00:15:49.380 starting I/O failed: -6 00:15:49.380 Write completed with error (sct=0, sc=8) 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 starting I/O failed: -6 00:15:49.380 Write completed with error (sct=0, sc=8) 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 Write completed with error (sct=0, sc=8) 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 starting I/O failed: -6 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 Write completed with error (sct=0, sc=8) 00:15:49.380 starting I/O failed: -6 00:15:49.380 Write completed with error (sct=0, sc=8) 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 Write completed with error (sct=0, sc=8) 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 starting I/O failed: -6 00:15:49.380 [2024-06-07 23:12:11.938658] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d5bf0 is same with the state(5) to be set 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 Write completed with error (sct=0, sc=8) 00:15:49.380 Read completed with error (sct=0, sc=8) 
00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 Write completed with error (sct=0, sc=8) 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 Read completed with error (sct=0, sc=8) 00:15:49.380 Write completed with error (sct=0, sc=8) 00:15:49.380 Write completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Write completed with error (sct=0, sc=8) 00:15:49.381 Write completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Write completed with error (sct=0, sc=8) 00:15:49.381 Write completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Write completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Write completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Write completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Write completed with error (sct=0, sc=8) 00:15:49.381 Write completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 starting I/O failed: -6 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Write completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 starting I/O failed: -6 00:15:49.381 Write completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 starting I/O failed: -6 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Write completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 starting I/O failed: -6 00:15:49.381 Write completed with error (sct=0, 
sc=8) 00:15:49.381 Write completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 starting I/O failed: -6 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Write completed with error (sct=0, sc=8) 00:15:49.381 starting I/O failed: -6 00:15:49.381 Write completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 starting I/O failed: -6 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Write completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 starting I/O failed: -6 00:15:49.381 Write completed with error (sct=0, sc=8) 00:15:49.381 Write completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 starting I/O failed: -6 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 starting I/O failed: -6 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Write completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 starting I/O failed: -6 00:15:49.381 Write completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 starting I/O failed: -6 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 starting I/O failed: -6 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Write completed with error (sct=0, sc=8) 00:15:49.381 starting I/O failed: -6 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Write completed with error (sct=0, sc=8) 00:15:49.381 starting I/O failed: -6 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 starting I/O failed: -6 00:15:49.381 Write completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 starting I/O failed: -6 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 starting I/O failed: -6 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 starting I/O failed: -6 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 starting I/O failed: -6 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 starting I/O failed: -6 00:15:49.381 Write completed with error (sct=0, sc=8) 00:15:49.381 Write completed with error (sct=0, sc=8) 00:15:49.381 starting I/O failed: -6 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 starting I/O failed: -6 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 Read completed with error (sct=0, sc=8) 00:15:49.381 starting 
I/O failed: -6
00:15:49.381 [dozens of repeated 'Read/Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' entries omitted]
00:15:50.324 [2024-06-07 23:12:12.910321] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2670 is same with the state(5) to be set
00:15:50.324 [repeated 'Read/Write completed with error (sct=0, sc=8)' entries omitted]
00:15:50.324 [2024-06-07 23:12:12.942293] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d52c0 is same with the state(5) to be set
00:15:50.325 [repeated 'Read/Write completed with error (sct=0, sc=8)' entries omitted]
00:15:50.325 [2024-06-07 23:12:12.942398] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d5ea0 is same with the state(5) to be set
00:15:50.325 [repeated 'Read/Write completed with error (sct=0, sc=8)' entries omitted]
00:15:50.325 [2024-06-07 23:12:12.946364] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe19000bf20 is same with the state(5) to be set
00:15:50.325 [repeated 'Read/Write completed with error (sct=0, sc=8)' entries omitted]
00:15:50.325 [2024-06-07 23:12:12.946541] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe19000c600 is same with the state(5) to be set
00:15:50.325 [2024-06-07 23:12:12.947087] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d2670 (9): Bad file descriptor
00:15:50.325 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:15:50.325 Initializing NVMe Controllers
00:15:50.325 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:15:50.325 Controller IO queue size 128, less than required.
00:15:50.325 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:15:50.325 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:15:50.325 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:15:50.325 Initialization complete. Launching workers. 
00:15:50.325 ======================================================== 00:15:50.325 Latency(us) 00:15:50.325 Device Information : IOPS MiB/s Average min max 00:15:50.325 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 161.42 0.08 913479.90 217.42 1005654.74 00:15:50.325 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 178.86 0.09 962382.03 392.71 2001656.71 00:15:50.325 ======================================================== 00:15:50.325 Total : 340.28 0.17 939183.95 217.42 2001656.71 00:15:50.325 00:15:50.325 23:12:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:50.325 23:12:12 -- target/delete_subsystem.sh@34 -- # delay=0 00:15:50.325 23:12:12 -- target/delete_subsystem.sh@35 -- # kill -0 2770115 00:15:50.325 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2770115) - No such process 00:15:50.325 23:12:12 -- target/delete_subsystem.sh@45 -- # NOT wait 2770115 00:15:50.325 23:12:12 -- common/autotest_common.sh@640 -- # local es=0 00:15:50.325 23:12:12 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 2770115 00:15:50.325 23:12:12 -- common/autotest_common.sh@628 -- # local arg=wait 00:15:50.325 23:12:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:50.325 23:12:12 -- common/autotest_common.sh@632 -- # type -t wait 00:15:50.325 23:12:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:50.325 23:12:12 -- common/autotest_common.sh@643 -- # wait 2770115 00:15:50.325 23:12:12 -- common/autotest_common.sh@643 -- # es=1 00:15:50.325 23:12:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:50.325 23:12:12 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:50.325 23:12:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:50.325 23:12:12 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:50.325 23:12:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:50.325 23:12:12 -- common/autotest_common.sh@10 -- # set +x 00:15:50.326 23:12:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:50.326 23:12:12 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:50.326 23:12:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:50.326 23:12:12 -- common/autotest_common.sh@10 -- # set +x 00:15:50.326 [2024-06-07 23:12:12.978841] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:50.326 23:12:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:50.326 23:12:12 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:50.326 23:12:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:50.326 23:12:12 -- common/autotest_common.sh@10 -- # set +x 00:15:50.326 23:12:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:50.326 23:12:12 -- target/delete_subsystem.sh@54 -- # perf_pid=2770916 00:15:50.326 23:12:12 -- target/delete_subsystem.sh@56 -- # delay=0 00:15:50.326 23:12:12 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:50.326 23:12:12 -- target/delete_subsystem.sh@57 -- # kill -0 2770916 00:15:50.326 23:12:12 -- target/delete_subsystem.sh@58 -- # 
sleep 0.5 00:15:50.586 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.586 [2024-06-07 23:12:13.047195] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:50.847 23:12:13 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:50.847 23:12:13 -- target/delete_subsystem.sh@57 -- # kill -0 2770916 00:15:50.847 23:12:13 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:51.420 23:12:14 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:51.420 23:12:14 -- target/delete_subsystem.sh@57 -- # kill -0 2770916 00:15:51.420 23:12:14 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:51.991 23:12:14 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:51.991 23:12:14 -- target/delete_subsystem.sh@57 -- # kill -0 2770916 00:15:51.991 23:12:14 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:52.561 23:12:15 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:52.561 23:12:15 -- target/delete_subsystem.sh@57 -- # kill -0 2770916 00:15:52.561 23:12:15 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:53.136 23:12:15 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:53.136 23:12:15 -- target/delete_subsystem.sh@57 -- # kill -0 2770916 00:15:53.136 23:12:15 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:53.399 23:12:16 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:53.399 23:12:16 -- target/delete_subsystem.sh@57 -- # kill -0 2770916 00:15:53.399 23:12:16 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:53.662 Initializing NVMe Controllers 00:15:53.662 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:53.662 Controller IO queue size 128, less than required. 00:15:53.662 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:53.662 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:53.662 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:53.662 Initialization complete. Launching workers. 
00:15:53.662 ======================================================== 00:15:53.662 Latency(us) 00:15:53.662 Device Information : IOPS MiB/s Average min max 00:15:53.662 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001889.75 1000173.77 1041285.88 00:15:53.662 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002900.85 1000346.89 1008843.01 00:15:53.662 ======================================================== 00:15:53.662 Total : 256.00 0.12 1002395.30 1000173.77 1041285.88 00:15:53.662 00:15:53.923 23:12:16 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:53.923 23:12:16 -- target/delete_subsystem.sh@57 -- # kill -0 2770916 00:15:53.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2770916) - No such process 00:15:53.923 23:12:16 -- target/delete_subsystem.sh@67 -- # wait 2770916 00:15:53.923 23:12:16 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:53.923 23:12:16 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:15:53.923 23:12:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:53.923 23:12:16 -- nvmf/common.sh@116 -- # sync 00:15:53.923 23:12:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:53.923 23:12:16 -- nvmf/common.sh@119 -- # set +e 00:15:53.923 23:12:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:53.923 23:12:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:53.923 rmmod nvme_tcp 00:15:53.923 rmmod nvme_fabrics 00:15:53.923 rmmod nvme_keyring 00:15:53.923 23:12:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:53.923 23:12:16 -- nvmf/common.sh@123 -- # set -e 00:15:53.923 23:12:16 -- nvmf/common.sh@124 -- # return 0 00:15:53.923 23:12:16 -- nvmf/common.sh@477 -- # '[' -n 2769818 ']' 00:15:53.923 23:12:16 -- nvmf/common.sh@478 -- # killprocess 2769818 00:15:53.923 23:12:16 -- common/autotest_common.sh@926 -- # '[' -z 2769818 ']' 00:15:53.923 23:12:16 -- common/autotest_common.sh@930 -- # kill -0 2769818 00:15:53.923 23:12:16 -- common/autotest_common.sh@931 -- # uname 00:15:53.923 23:12:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:53.923 23:12:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2769818 00:15:54.184 23:12:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:54.184 23:12:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:54.184 23:12:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2769818' 00:15:54.184 killing process with pid 2769818 00:15:54.184 23:12:16 -- common/autotest_common.sh@945 -- # kill 2769818 00:15:54.184 23:12:16 -- common/autotest_common.sh@950 -- # wait 2769818 00:15:54.184 23:12:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:54.184 23:12:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:54.184 23:12:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:54.185 23:12:16 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:54.185 23:12:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:54.185 23:12:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.185 23:12:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:54.185 23:12:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.733 23:12:18 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:56.733 00:15:56.733 real 0m17.620s 00:15:56.733 user 0m29.916s 00:15:56.733 sys 0m6.367s 
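The delete_subsystem run above drives I/O with spdk_nvme_perf while the target subsystem is torn down, then polls the perf process until it exits. A minimal sketch of that pattern in shell, assuming a reachable target at 10.0.0.2:4420 and reusing only the flags that appear in the trace; the 0.5 s sleep and the 20-iteration cap mirror the trace, but the loop structure here is illustrative, not the test script verbatim:

    # Start the I/O generator in the background (flags as in the trace above).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

    delay=0
    # kill -0 only checks that the process still exists; loop until it is gone
    # or roughly 10 seconds (20 x 0.5 s) have elapsed.
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && break
        sleep 0.5
    done

Once the perf process has exited, a further "kill -0 <pid>" fails with "No such process", which is exactly what the trace above records before the target is torn down.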
00:15:56.733 23:12:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:56.733 23:12:18 -- common/autotest_common.sh@10 -- # set +x 00:15:56.733 ************************************ 00:15:56.733 END TEST nvmf_delete_subsystem 00:15:56.733 ************************************ 00:15:56.733 23:12:18 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:15:56.733 23:12:18 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:56.733 23:12:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:56.733 23:12:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:56.733 23:12:18 -- common/autotest_common.sh@10 -- # set +x 00:15:56.733 ************************************ 00:15:56.733 START TEST nvmf_nvme_cli 00:15:56.733 ************************************ 00:15:56.733 23:12:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:56.733 * Looking for test storage... 00:15:56.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:56.733 23:12:18 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:56.733 23:12:18 -- nvmf/common.sh@7 -- # uname -s 00:15:56.733 23:12:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:56.733 23:12:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:56.733 23:12:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:56.733 23:12:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:56.733 23:12:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:56.733 23:12:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:56.733 23:12:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:56.733 23:12:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:56.733 23:12:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:56.733 23:12:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:56.733 23:12:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:56.733 23:12:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:56.733 23:12:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:56.733 23:12:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:56.733 23:12:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:56.733 23:12:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:56.733 23:12:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:56.733 23:12:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:56.733 23:12:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:56.733 23:12:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.733 23:12:18 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.733 23:12:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.733 23:12:18 -- paths/export.sh@5 -- # export PATH 00:15:56.733 23:12:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.733 23:12:18 -- nvmf/common.sh@46 -- # : 0 00:15:56.733 23:12:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:56.733 23:12:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:56.733 23:12:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:56.733 23:12:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:56.733 23:12:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:56.733 23:12:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:56.733 23:12:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:56.733 23:12:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:56.733 23:12:19 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:56.733 23:12:19 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:56.733 23:12:19 -- target/nvme_cli.sh@14 -- # devs=() 00:15:56.733 23:12:19 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:56.734 23:12:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:56.734 23:12:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:56.734 23:12:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:56.734 23:12:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:56.734 23:12:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:56.734 23:12:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.734 23:12:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:56.734 23:12:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.734 23:12:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:56.734 23:12:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:56.734 23:12:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:56.734 23:12:19 -- common/autotest_common.sh@10 -- # set +x 00:16:03.418 23:12:25 -- 
nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:03.418 23:12:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:03.418 23:12:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:03.418 23:12:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:03.418 23:12:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:03.418 23:12:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:03.418 23:12:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:03.418 23:12:25 -- nvmf/common.sh@294 -- # net_devs=() 00:16:03.418 23:12:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:03.418 23:12:25 -- nvmf/common.sh@295 -- # e810=() 00:16:03.418 23:12:25 -- nvmf/common.sh@295 -- # local -ga e810 00:16:03.418 23:12:25 -- nvmf/common.sh@296 -- # x722=() 00:16:03.419 23:12:25 -- nvmf/common.sh@296 -- # local -ga x722 00:16:03.419 23:12:25 -- nvmf/common.sh@297 -- # mlx=() 00:16:03.419 23:12:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:03.419 23:12:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:03.419 23:12:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:03.419 23:12:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:03.419 23:12:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:03.419 23:12:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:03.419 23:12:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:03.419 23:12:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:03.419 23:12:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:03.419 23:12:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:03.419 23:12:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:03.419 23:12:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:03.419 23:12:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:03.419 23:12:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:03.419 23:12:25 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:03.419 23:12:25 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:03.419 23:12:25 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:03.419 23:12:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:03.419 23:12:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:03.419 23:12:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:03.419 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:03.419 23:12:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:03.419 23:12:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:03.419 23:12:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:03.419 23:12:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:03.419 23:12:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:03.419 23:12:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:03.419 23:12:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:03.419 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:03.419 23:12:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:03.419 23:12:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:03.419 23:12:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:03.419 23:12:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:03.419 23:12:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:16:03.419 23:12:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:03.419 23:12:25 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:03.419 23:12:25 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:03.419 23:12:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:03.419 23:12:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:03.419 23:12:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:03.419 23:12:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:03.419 23:12:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:03.419 Found net devices under 0000:31:00.0: cvl_0_0 00:16:03.419 23:12:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:03.419 23:12:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:03.419 23:12:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:03.419 23:12:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:03.419 23:12:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:03.419 23:12:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:03.419 Found net devices under 0000:31:00.1: cvl_0_1 00:16:03.419 23:12:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:03.419 23:12:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:03.419 23:12:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:03.419 23:12:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:03.419 23:12:25 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:03.419 23:12:25 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:03.419 23:12:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:03.419 23:12:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:03.419 23:12:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:03.419 23:12:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:03.419 23:12:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:03.419 23:12:25 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:03.419 23:12:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:03.419 23:12:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:03.419 23:12:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:03.419 23:12:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:03.419 23:12:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:03.419 23:12:25 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:03.419 23:12:25 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:03.419 23:12:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:03.419 23:12:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:03.419 23:12:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:03.685 23:12:26 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:03.685 23:12:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:03.685 23:12:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:03.685 23:12:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:03.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:03.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.374 ms 00:16:03.686 00:16:03.686 --- 10.0.0.2 ping statistics --- 00:16:03.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.686 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:16:03.686 23:12:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:03.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:03.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.358 ms 00:16:03.686 00:16:03.686 --- 10.0.0.1 ping statistics --- 00:16:03.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.686 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:16:03.686 23:12:26 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:03.686 23:12:26 -- nvmf/common.sh@410 -- # return 0 00:16:03.686 23:12:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:03.686 23:12:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:03.686 23:12:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:03.686 23:12:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:03.686 23:12:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:03.686 23:12:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:03.686 23:12:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:03.686 23:12:26 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:03.686 23:12:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:03.686 23:12:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:03.686 23:12:26 -- common/autotest_common.sh@10 -- # set +x 00:16:03.686 23:12:26 -- nvmf/common.sh@469 -- # nvmfpid=2775891 00:16:03.686 23:12:26 -- nvmf/common.sh@470 -- # waitforlisten 2775891 00:16:03.686 23:12:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:03.686 23:12:26 -- common/autotest_common.sh@819 -- # '[' -z 2775891 ']' 00:16:03.686 23:12:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.686 23:12:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:03.686 23:12:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.686 23:12:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:03.686 23:12:26 -- common/autotest_common.sh@10 -- # set +x 00:16:03.686 [2024-06-07 23:12:26.318383] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:16:03.686 [2024-06-07 23:12:26.318437] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.686 EAL: No free 2048 kB hugepages reported on node 1 00:16:03.947 [2024-06-07 23:12:26.388954] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:03.947 [2024-06-07 23:12:26.421003] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:03.947 [2024-06-07 23:12:26.421145] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:03.947 [2024-06-07 23:12:26.421156] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:03.947 [2024-06-07 23:12:26.421164] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:03.947 [2024-06-07 23:12:26.421277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.947 [2024-06-07 23:12:26.421383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:03.947 [2024-06-07 23:12:26.421646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:03.947 [2024-06-07 23:12:26.421647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.519 23:12:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:04.519 23:12:27 -- common/autotest_common.sh@852 -- # return 0 00:16:04.519 23:12:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:04.519 23:12:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:04.519 23:12:27 -- common/autotest_common.sh@10 -- # set +x 00:16:04.519 23:12:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:04.519 23:12:27 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:04.519 23:12:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:04.519 23:12:27 -- common/autotest_common.sh@10 -- # set +x 00:16:04.519 [2024-06-07 23:12:27.188699] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:04.519 23:12:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:04.519 23:12:27 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:04.519 23:12:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:04.519 23:12:27 -- common/autotest_common.sh@10 -- # set +x 00:16:04.780 Malloc0 00:16:04.780 23:12:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:04.780 23:12:27 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:04.780 23:12:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:04.780 23:12:27 -- common/autotest_common.sh@10 -- # set +x 00:16:04.780 Malloc1 00:16:04.780 23:12:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:04.780 23:12:27 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:04.780 23:12:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:04.780 23:12:27 -- common/autotest_common.sh@10 -- # set +x 00:16:04.780 23:12:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:04.780 23:12:27 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:04.780 23:12:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:04.780 23:12:27 -- common/autotest_common.sh@10 -- # set +x 00:16:04.780 23:12:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:04.780 23:12:27 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:04.780 23:12:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:04.780 23:12:27 -- common/autotest_common.sh@10 -- # set +x 00:16:04.780 23:12:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:04.780 23:12:27 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:04.780 23:12:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:04.780 23:12:27 -- common/autotest_common.sh@10 -- # set +x 00:16:04.780 [2024-06-07 23:12:27.278631] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:16:04.780 23:12:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:04.780 23:12:27 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:04.780 23:12:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:04.781 23:12:27 -- common/autotest_common.sh@10 -- # set +x 00:16:04.781 23:12:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:04.781 23:12:27 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:16:04.781 00:16:04.781 Discovery Log Number of Records 2, Generation counter 2 00:16:04.781 =====Discovery Log Entry 0====== 00:16:04.781 trtype: tcp 00:16:04.781 adrfam: ipv4 00:16:04.781 subtype: current discovery subsystem 00:16:04.781 treq: not required 00:16:04.781 portid: 0 00:16:04.781 trsvcid: 4420 00:16:04.781 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:04.781 traddr: 10.0.0.2 00:16:04.781 eflags: explicit discovery connections, duplicate discovery information 00:16:04.781 sectype: none 00:16:04.781 =====Discovery Log Entry 1====== 00:16:04.781 trtype: tcp 00:16:04.781 adrfam: ipv4 00:16:04.781 subtype: nvme subsystem 00:16:04.781 treq: not required 00:16:04.781 portid: 0 00:16:04.781 trsvcid: 4420 00:16:04.781 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:04.781 traddr: 10.0.0.2 00:16:04.781 eflags: none 00:16:04.781 sectype: none 00:16:04.781 23:12:27 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:04.781 23:12:27 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:04.781 23:12:27 -- nvmf/common.sh@510 -- # local dev _ 00:16:04.781 23:12:27 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:04.781 23:12:27 -- nvmf/common.sh@509 -- # nvme list 00:16:04.781 23:12:27 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:16:04.781 23:12:27 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:04.781 23:12:27 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:16:04.781 23:12:27 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:04.781 23:12:27 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:04.781 23:12:27 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:06.692 23:12:28 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:06.692 23:12:28 -- common/autotest_common.sh@1177 -- # local i=0 00:16:06.692 23:12:28 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:16:06.692 23:12:28 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:16:06.692 23:12:28 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:16:06.692 23:12:28 -- common/autotest_common.sh@1184 -- # sleep 2 00:16:08.605 23:12:30 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:16:08.605 23:12:30 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:16:08.605 23:12:30 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:16:08.605 23:12:30 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:16:08.605 23:12:30 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:16:08.605 23:12:30 -- common/autotest_common.sh@1187 -- # return 0 00:16:08.605 23:12:30 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:08.605 23:12:30 -- 
nvmf/common.sh@510 -- # local dev _ 00:16:08.605 23:12:30 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:08.605 23:12:30 -- nvmf/common.sh@509 -- # nvme list 00:16:08.605 23:12:31 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:16:08.605 23:12:31 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:08.605 23:12:31 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:16:08.605 23:12:31 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:08.605 23:12:31 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:08.605 23:12:31 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:16:08.605 23:12:31 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:08.605 23:12:31 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:08.605 23:12:31 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:16:08.605 23:12:31 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:08.605 23:12:31 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:16:08.605 /dev/nvme0n1 ]] 00:16:08.605 23:12:31 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:08.605 23:12:31 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:08.605 23:12:31 -- nvmf/common.sh@510 -- # local dev _ 00:16:08.605 23:12:31 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:08.605 23:12:31 -- nvmf/common.sh@509 -- # nvme list 00:16:08.605 23:12:31 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:16:08.605 23:12:31 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:08.605 23:12:31 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:16:08.605 23:12:31 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:08.605 23:12:31 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:08.605 23:12:31 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:16:08.605 23:12:31 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:08.605 23:12:31 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:08.605 23:12:31 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:16:08.605 23:12:31 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:08.605 23:12:31 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:08.605 23:12:31 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:08.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.867 23:12:31 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:08.867 23:12:31 -- common/autotest_common.sh@1198 -- # local i=0 00:16:08.867 23:12:31 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:16:08.867 23:12:31 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:08.867 23:12:31 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:08.867 23:12:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:08.867 23:12:31 -- common/autotest_common.sh@1210 -- # return 0 00:16:08.867 23:12:31 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:08.867 23:12:31 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:08.867 23:12:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:08.867 23:12:31 -- common/autotest_common.sh@10 -- # set +x 00:16:08.867 23:12:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:08.867 23:12:31 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:08.867 23:12:31 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:08.867 23:12:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:08.867 23:12:31 -- nvmf/common.sh@116 -- # sync 00:16:08.867 23:12:31 -- 
nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:08.867 23:12:31 -- nvmf/common.sh@119 -- # set +e 00:16:08.867 23:12:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:08.867 23:12:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:08.867 rmmod nvme_tcp 00:16:09.127 rmmod nvme_fabrics 00:16:09.127 rmmod nvme_keyring 00:16:09.127 23:12:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:09.127 23:12:31 -- nvmf/common.sh@123 -- # set -e 00:16:09.127 23:12:31 -- nvmf/common.sh@124 -- # return 0 00:16:09.127 23:12:31 -- nvmf/common.sh@477 -- # '[' -n 2775891 ']' 00:16:09.127 23:12:31 -- nvmf/common.sh@478 -- # killprocess 2775891 00:16:09.127 23:12:31 -- common/autotest_common.sh@926 -- # '[' -z 2775891 ']' 00:16:09.127 23:12:31 -- common/autotest_common.sh@930 -- # kill -0 2775891 00:16:09.127 23:12:31 -- common/autotest_common.sh@931 -- # uname 00:16:09.127 23:12:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:09.127 23:12:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2775891 00:16:09.127 23:12:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:09.127 23:12:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:09.127 23:12:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2775891' 00:16:09.127 killing process with pid 2775891 00:16:09.127 23:12:31 -- common/autotest_common.sh@945 -- # kill 2775891 00:16:09.127 23:12:31 -- common/autotest_common.sh@950 -- # wait 2775891 00:16:09.127 23:12:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:09.127 23:12:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:09.127 23:12:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:09.127 23:12:31 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:09.127 23:12:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:09.127 23:12:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:09.127 23:12:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:09.127 23:12:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.675 23:12:33 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:11.675 00:16:11.675 real 0m14.992s 00:16:11.675 user 0m23.534s 00:16:11.675 sys 0m5.951s 00:16:11.675 23:12:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:11.675 23:12:33 -- common/autotest_common.sh@10 -- # set +x 00:16:11.675 ************************************ 00:16:11.675 END TEST nvmf_nvme_cli 00:16:11.675 ************************************ 00:16:11.675 23:12:33 -- nvmf/nvmf.sh@39 -- # [[ 1 -eq 1 ]] 00:16:11.675 23:12:33 -- nvmf/nvmf.sh@40 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:11.675 23:12:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:11.675 23:12:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:11.675 23:12:33 -- common/autotest_common.sh@10 -- # set +x 00:16:11.675 ************************************ 00:16:11.675 START TEST nvmf_vfio_user 00:16:11.675 ************************************ 00:16:11.675 23:12:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:11.675 * Looking for test storage... 
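Recapping the nvmf_nvme_cli run that just completed: it reduces to a short nvme-cli sequence against the SPDK target, with the bdevs, subsystem and listener created over rpc.py as shown in the trace. A sketch of that sequence, using the addresses, NQNs and serial number taken from the run above:

    # Host identity used by the test (generated there with 'nvme gen-hostnqn').
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
    HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396

    # Discover what the target exposes on 10.0.0.2:4420.
    nvme discover --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -a 10.0.0.2 -s 4420

    # Connect to the subsystem, give the namespaces a moment to appear, then
    # confirm both Malloc namespaces arrived by matching the serial number.
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    sleep 2
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 2

    # Clean up the host side before deleting the subsystem on the target.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
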
00:16:11.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:11.675 23:12:34 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:11.675 23:12:34 -- nvmf/common.sh@7 -- # uname -s 00:16:11.675 23:12:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:11.675 23:12:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:11.675 23:12:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:11.675 23:12:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:11.675 23:12:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:11.675 23:12:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:11.675 23:12:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:11.675 23:12:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:11.675 23:12:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:11.675 23:12:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:11.675 23:12:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:11.675 23:12:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:11.675 23:12:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:11.675 23:12:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:11.675 23:12:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:11.675 23:12:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:11.675 23:12:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:11.675 23:12:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:11.675 23:12:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:11.675 23:12:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.675 23:12:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.675 23:12:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.675 23:12:34 -- paths/export.sh@5 -- # export PATH 00:16:11.675 23:12:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.675 23:12:34 -- nvmf/common.sh@46 -- # : 0 00:16:11.675 23:12:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:11.675 23:12:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:11.675 23:12:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:11.675 23:12:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:11.675 23:12:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:11.675 23:12:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:11.675 23:12:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:11.675 23:12:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:11.675 23:12:34 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:11.675 23:12:34 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:11.675 23:12:34 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:16:11.675 23:12:34 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:11.675 23:12:34 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:11.675 23:12:34 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:11.675 23:12:34 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:16:11.675 23:12:34 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:16:11.675 23:12:34 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:16:11.675 23:12:34 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:16:11.675 23:12:34 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2777514 00:16:11.675 23:12:34 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2777514' 00:16:11.675 Process pid: 2777514 00:16:11.675 23:12:34 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:11.675 23:12:34 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2777514 00:16:11.675 23:12:34 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:16:11.675 23:12:34 -- common/autotest_common.sh@819 -- # '[' -z 2777514 ']' 00:16:11.675 23:12:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.675 23:12:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:11.675 23:12:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:11.675 23:12:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:11.676 23:12:34 -- common/autotest_common.sh@10 -- # set +x 00:16:11.676 [2024-06-07 23:12:34.097599] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:16:11.676 [2024-06-07 23:12:34.097673] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:11.676 EAL: No free 2048 kB hugepages reported on node 1 00:16:11.676 [2024-06-07 23:12:34.163217] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:11.676 [2024-06-07 23:12:34.193110] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:11.676 [2024-06-07 23:12:34.193251] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:11.676 [2024-06-07 23:12:34.193261] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:11.676 [2024-06-07 23:12:34.193270] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:11.676 [2024-06-07 23:12:34.193395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:11.676 [2024-06-07 23:12:34.193633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:11.676 [2024-06-07 23:12:34.193792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:11.676 [2024-06-07 23:12:34.193793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.246 23:12:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:12.246 23:12:34 -- common/autotest_common.sh@852 -- # return 0 00:16:12.246 23:12:34 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:13.630 23:12:35 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:16:13.630 23:12:36 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:13.630 23:12:36 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:13.630 23:12:36 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:13.630 23:12:36 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:13.630 23:12:36 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:13.630 Malloc1 00:16:13.630 23:12:36 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:13.892 23:12:36 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:13.892 23:12:36 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:14.152 23:12:36 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:14.152 23:12:36 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:14.152 23:12:36 -- 
target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:14.413 Malloc2 00:16:14.413 23:12:36 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:14.413 23:12:37 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:14.673 23:12:37 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:14.936 23:12:37 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:14.936 23:12:37 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:14.936 23:12:37 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:14.936 23:12:37 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:14.936 23:12:37 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:14.936 23:12:37 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:14.936 [2024-06-07 23:12:37.383799] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:16:14.936 [2024-06-07 23:12:37.383839] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2778212 ] 00:16:14.936 EAL: No free 2048 kB hugepages reported on node 1 00:16:14.936 [2024-06-07 23:12:37.416880] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:14.936 [2024-06-07 23:12:37.424509] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:14.936 [2024-06-07 23:12:37.424530] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f640b720000 00:16:14.936 [2024-06-07 23:12:37.425500] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:14.936 [2024-06-07 23:12:37.426508] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:14.936 [2024-06-07 23:12:37.427517] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:14.936 [2024-06-07 23:12:37.428516] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:14.936 [2024-06-07 23:12:37.429526] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:14.936 [2024-06-07 23:12:37.430525] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:14.936 [2024-06-07 23:12:37.431537] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:14.936 [2024-06-07 23:12:37.432538] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:14.936 [2024-06-07 23:12:37.433551] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:14.936 [2024-06-07 23:12:37.433561] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f640a4e6000 00:16:14.936 [2024-06-07 23:12:37.434888] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:14.936 [2024-06-07 23:12:37.456407] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:14.936 [2024-06-07 23:12:37.456437] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:16:14.936 [2024-06-07 23:12:37.458701] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:14.936 [2024-06-07 23:12:37.458746] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:14.936 [2024-06-07 23:12:37.458826] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:16:14.936 [2024-06-07 23:12:37.458843] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:16:14.936 [2024-06-07 23:12:37.458849] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:16:14.936 [2024-06-07 23:12:37.459698] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:14.937 [2024-06-07 23:12:37.459708] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:16:14.937 [2024-06-07 23:12:37.459715] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:16:14.937 [2024-06-07 23:12:37.460703] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:14.937 [2024-06-07 23:12:37.460713] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:16:14.937 [2024-06-07 23:12:37.460720] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:16:14.937 [2024-06-07 23:12:37.461705] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:14.937 [2024-06-07 23:12:37.461715] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:14.937 [2024-06-07 23:12:37.462716] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 
0x1c, value 0x0 00:16:14.937 [2024-06-07 23:12:37.462724] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:16:14.937 [2024-06-07 23:12:37.462729] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:16:14.937 [2024-06-07 23:12:37.462735] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:14.937 [2024-06-07 23:12:37.462840] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:16:14.937 [2024-06-07 23:12:37.462845] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:14.937 [2024-06-07 23:12:37.462850] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:14.937 [2024-06-07 23:12:37.463718] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:14.937 [2024-06-07 23:12:37.464724] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:14.937 [2024-06-07 23:12:37.465725] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:14.937 [2024-06-07 23:12:37.466746] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:14.937 [2024-06-07 23:12:37.467737] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:14.937 [2024-06-07 23:12:37.467744] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:14.937 [2024-06-07 23:12:37.467749] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:16:14.937 [2024-06-07 23:12:37.467770] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:16:14.937 [2024-06-07 23:12:37.467782] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:16:14.937 [2024-06-07 23:12:37.467796] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:14.937 [2024-06-07 23:12:37.467801] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:14.937 [2024-06-07 23:12:37.467814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:14.937 [2024-06-07 23:12:37.467862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:14.937 [2024-06-07 23:12:37.467871] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:16:14.937 [2024-06-07 23:12:37.467878] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:16:14.937 [2024-06-07 23:12:37.467882] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:16:14.937 [2024-06-07 23:12:37.467887] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:14.937 [2024-06-07 23:12:37.467894] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:16:14.937 [2024-06-07 23:12:37.467898] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:16:14.937 [2024-06-07 23:12:37.467903] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:16:14.937 [2024-06-07 23:12:37.467912] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:16:14.937 [2024-06-07 23:12:37.467923] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:14.937 [2024-06-07 23:12:37.467934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:14.937 [2024-06-07 23:12:37.467944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:14.937 [2024-06-07 23:12:37.467952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:14.937 [2024-06-07 23:12:37.467960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:14.937 [2024-06-07 23:12:37.467968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:14.937 [2024-06-07 23:12:37.467973] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:14.937 [2024-06-07 23:12:37.467983] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:14.937 [2024-06-07 23:12:37.467992] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:14.937 [2024-06-07 23:12:37.468001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:14.937 [2024-06-07 23:12:37.468006] nvme_ctrlr.c:2877:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:16:14.937 [2024-06-07 23:12:37.468011] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:14.937 [2024-06-07 23:12:37.468018] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:16:14.937 [2024-06-07 23:12:37.468025] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:14.937 [2024-06-07 23:12:37.468034] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:14.937 [2024-06-07 23:12:37.468049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:14.937 [2024-06-07 23:12:37.468096] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:16:14.937 [2024-06-07 23:12:37.468103] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:14.937 [2024-06-07 23:12:37.468110] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:14.937 [2024-06-07 23:12:37.468114] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:14.937 [2024-06-07 23:12:37.468121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:14.937 [2024-06-07 23:12:37.468131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:14.937 [2024-06-07 23:12:37.468140] nvme_ctrlr.c:4542:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:16:14.937 [2024-06-07 23:12:37.468147] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:16:14.937 [2024-06-07 23:12:37.468155] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:16:14.937 [2024-06-07 23:12:37.468161] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:14.937 [2024-06-07 23:12:37.468165] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:14.937 [2024-06-07 23:12:37.468171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:14.937 [2024-06-07 23:12:37.468188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:14.937 [2024-06-07 23:12:37.468199] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:14.937 [2024-06-07 23:12:37.468206] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:14.937 [2024-06-07 23:12:37.468213] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:14.937 [2024-06-07 23:12:37.468217] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:14.937 [2024-06-07 23:12:37.468223] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:14.937 [2024-06-07 23:12:37.468235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:14.937 [2024-06-07 23:12:37.468247] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:14.937 [2024-06-07 23:12:37.468254] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:16:14.937 [2024-06-07 23:12:37.468262] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:16:14.937 [2024-06-07 23:12:37.468267] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:14.937 [2024-06-07 23:12:37.468272] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:16:14.937 [2024-06-07 23:12:37.468277] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:16:14.937 [2024-06-07 23:12:37.468281] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:16:14.938 [2024-06-07 23:12:37.468286] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:16:14.938 [2024-06-07 23:12:37.468304] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:14.938 [2024-06-07 23:12:37.468313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:14.938 [2024-06-07 23:12:37.468324] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:14.938 [2024-06-07 23:12:37.468333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:14.938 [2024-06-07 23:12:37.468346] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:14.938 [2024-06-07 23:12:37.468353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:14.938 [2024-06-07 23:12:37.468363] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:14.938 [2024-06-07 23:12:37.468374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:14.938 [2024-06-07 23:12:37.468384] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:14.938 [2024-06-07 23:12:37.468388] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:14.938 [2024-06-07 23:12:37.468392] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:14.938 [2024-06-07 
23:12:37.468395] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:14.938 [2024-06-07 23:12:37.468401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:14.938 [2024-06-07 23:12:37.468409] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:14.938 [2024-06-07 23:12:37.468413] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:14.938 [2024-06-07 23:12:37.468418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:14.938 [2024-06-07 23:12:37.468425] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:14.938 [2024-06-07 23:12:37.468429] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:14.938 [2024-06-07 23:12:37.468435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:14.938 [2024-06-07 23:12:37.468443] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:14.938 [2024-06-07 23:12:37.468447] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:14.938 [2024-06-07 23:12:37.468452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:14.938 [2024-06-07 23:12:37.468459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:14.938 [2024-06-07 23:12:37.468472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:14.938 [2024-06-07 23:12:37.468481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:14.938 [2024-06-07 23:12:37.468488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:14.938 ===================================================== 00:16:14.938 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:14.938 ===================================================== 00:16:14.938 Controller Capabilities/Features 00:16:14.938 ================================ 00:16:14.938 Vendor ID: 4e58 00:16:14.938 Subsystem Vendor ID: 4e58 00:16:14.938 Serial Number: SPDK1 00:16:14.938 Model Number: SPDK bdev Controller 00:16:14.938 Firmware Version: 24.01.1 00:16:14.938 Recommended Arb Burst: 6 00:16:14.938 IEEE OUI Identifier: 8d 6b 50 00:16:14.938 Multi-path I/O 00:16:14.938 May have multiple subsystem ports: Yes 00:16:14.938 May have multiple controllers: Yes 00:16:14.938 Associated with SR-IOV VF: No 00:16:14.938 Max Data Transfer Size: 131072 00:16:14.938 Max Number of Namespaces: 32 00:16:14.938 Max Number of I/O Queues: 127 00:16:14.938 NVMe Specification Version (VS): 1.3 00:16:14.938 NVMe Specification Version (Identify): 1.3 00:16:14.938 Maximum Queue Entries: 256 00:16:14.938 Contiguous Queues Required: Yes 00:16:14.938 Arbitration Mechanisms Supported 00:16:14.938 
Weighted Round Robin: Not Supported 00:16:14.938 Vendor Specific: Not Supported 00:16:14.938 Reset Timeout: 15000 ms 00:16:14.938 Doorbell Stride: 4 bytes 00:16:14.938 NVM Subsystem Reset: Not Supported 00:16:14.938 Command Sets Supported 00:16:14.938 NVM Command Set: Supported 00:16:14.938 Boot Partition: Not Supported 00:16:14.938 Memory Page Size Minimum: 4096 bytes 00:16:14.938 Memory Page Size Maximum: 4096 bytes 00:16:14.938 Persistent Memory Region: Not Supported 00:16:14.938 Optional Asynchronous Events Supported 00:16:14.938 Namespace Attribute Notices: Supported 00:16:14.938 Firmware Activation Notices: Not Supported 00:16:14.938 ANA Change Notices: Not Supported 00:16:14.938 PLE Aggregate Log Change Notices: Not Supported 00:16:14.938 LBA Status Info Alert Notices: Not Supported 00:16:14.938 EGE Aggregate Log Change Notices: Not Supported 00:16:14.938 Normal NVM Subsystem Shutdown event: Not Supported 00:16:14.938 Zone Descriptor Change Notices: Not Supported 00:16:14.938 Discovery Log Change Notices: Not Supported 00:16:14.938 Controller Attributes 00:16:14.938 128-bit Host Identifier: Supported 00:16:14.938 Non-Operational Permissive Mode: Not Supported 00:16:14.938 NVM Sets: Not Supported 00:16:14.938 Read Recovery Levels: Not Supported 00:16:14.938 Endurance Groups: Not Supported 00:16:14.938 Predictable Latency Mode: Not Supported 00:16:14.938 Traffic Based Keep ALive: Not Supported 00:16:14.938 Namespace Granularity: Not Supported 00:16:14.938 SQ Associations: Not Supported 00:16:14.938 UUID List: Not Supported 00:16:14.938 Multi-Domain Subsystem: Not Supported 00:16:14.938 Fixed Capacity Management: Not Supported 00:16:14.938 Variable Capacity Management: Not Supported 00:16:14.938 Delete Endurance Group: Not Supported 00:16:14.938 Delete NVM Set: Not Supported 00:16:14.938 Extended LBA Formats Supported: Not Supported 00:16:14.938 Flexible Data Placement Supported: Not Supported 00:16:14.938 00:16:14.938 Controller Memory Buffer Support 00:16:14.938 ================================ 00:16:14.938 Supported: No 00:16:14.938 00:16:14.938 Persistent Memory Region Support 00:16:14.938 ================================ 00:16:14.938 Supported: No 00:16:14.938 00:16:14.938 Admin Command Set Attributes 00:16:14.938 ============================ 00:16:14.938 Security Send/Receive: Not Supported 00:16:14.938 Format NVM: Not Supported 00:16:14.938 Firmware Activate/Download: Not Supported 00:16:14.938 Namespace Management: Not Supported 00:16:14.938 Device Self-Test: Not Supported 00:16:14.938 Directives: Not Supported 00:16:14.938 NVMe-MI: Not Supported 00:16:14.938 Virtualization Management: Not Supported 00:16:14.938 Doorbell Buffer Config: Not Supported 00:16:14.938 Get LBA Status Capability: Not Supported 00:16:14.938 Command & Feature Lockdown Capability: Not Supported 00:16:14.938 Abort Command Limit: 4 00:16:14.938 Async Event Request Limit: 4 00:16:14.938 Number of Firmware Slots: N/A 00:16:14.938 Firmware Slot 1 Read-Only: N/A 00:16:14.938 Firmware Activation Without Reset: N/A 00:16:14.938 Multiple Update Detection Support: N/A 00:16:14.938 Firmware Update Granularity: No Information Provided 00:16:14.938 Per-Namespace SMART Log: No 00:16:14.938 Asymmetric Namespace Access Log Page: Not Supported 00:16:14.938 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:16:14.938 Command Effects Log Page: Supported 00:16:14.938 Get Log Page Extended Data: Supported 00:16:14.938 Telemetry Log Pages: Not Supported 00:16:14.938 Persistent Event Log Pages: Not Supported 00:16:14.938 Supported 
Log Pages Log Page: May Support 00:16:14.938 Commands Supported & Effects Log Page: Not Supported 00:16:14.938 Feature Identifiers & Effects Log Page:May Support 00:16:14.938 NVMe-MI Commands & Effects Log Page: May Support 00:16:14.938 Data Area 4 for Telemetry Log: Not Supported 00:16:14.938 Error Log Page Entries Supported: 128 00:16:14.938 Keep Alive: Supported 00:16:14.938 Keep Alive Granularity: 10000 ms 00:16:14.938 00:16:14.938 NVM Command Set Attributes 00:16:14.938 ========================== 00:16:14.938 Submission Queue Entry Size 00:16:14.938 Max: 64 00:16:14.938 Min: 64 00:16:14.938 Completion Queue Entry Size 00:16:14.938 Max: 16 00:16:14.938 Min: 16 00:16:14.938 Number of Namespaces: 32 00:16:14.938 Compare Command: Supported 00:16:14.938 Write Uncorrectable Command: Not Supported 00:16:14.938 Dataset Management Command: Supported 00:16:14.938 Write Zeroes Command: Supported 00:16:14.938 Set Features Save Field: Not Supported 00:16:14.938 Reservations: Not Supported 00:16:14.938 Timestamp: Not Supported 00:16:14.938 Copy: Supported 00:16:14.938 Volatile Write Cache: Present 00:16:14.938 Atomic Write Unit (Normal): 1 00:16:14.938 Atomic Write Unit (PFail): 1 00:16:14.938 Atomic Compare & Write Unit: 1 00:16:14.938 Fused Compare & Write: Supported 00:16:14.939 Scatter-Gather List 00:16:14.939 SGL Command Set: Supported (Dword aligned) 00:16:14.939 SGL Keyed: Not Supported 00:16:14.939 SGL Bit Bucket Descriptor: Not Supported 00:16:14.939 SGL Metadata Pointer: Not Supported 00:16:14.939 Oversized SGL: Not Supported 00:16:14.939 SGL Metadata Address: Not Supported 00:16:14.939 SGL Offset: Not Supported 00:16:14.939 Transport SGL Data Block: Not Supported 00:16:14.939 Replay Protected Memory Block: Not Supported 00:16:14.939 00:16:14.939 Firmware Slot Information 00:16:14.939 ========================= 00:16:14.939 Active slot: 1 00:16:14.939 Slot 1 Firmware Revision: 24.01.1 00:16:14.939 00:16:14.939 00:16:14.939 Commands Supported and Effects 00:16:14.939 ============================== 00:16:14.939 Admin Commands 00:16:14.939 -------------- 00:16:14.939 Get Log Page (02h): Supported 00:16:14.939 Identify (06h): Supported 00:16:14.939 Abort (08h): Supported 00:16:14.939 Set Features (09h): Supported 00:16:14.939 Get Features (0Ah): Supported 00:16:14.939 Asynchronous Event Request (0Ch): Supported 00:16:14.939 Keep Alive (18h): Supported 00:16:14.939 I/O Commands 00:16:14.939 ------------ 00:16:14.939 Flush (00h): Supported LBA-Change 00:16:14.939 Write (01h): Supported LBA-Change 00:16:14.939 Read (02h): Supported 00:16:14.939 Compare (05h): Supported 00:16:14.939 Write Zeroes (08h): Supported LBA-Change 00:16:14.939 Dataset Management (09h): Supported LBA-Change 00:16:14.939 Copy (19h): Supported LBA-Change 00:16:14.939 Unknown (79h): Supported LBA-Change 00:16:14.939 Unknown (7Ah): Supported 00:16:14.939 00:16:14.939 Error Log 00:16:14.939 ========= 00:16:14.939 00:16:14.939 Arbitration 00:16:14.939 =========== 00:16:14.939 Arbitration Burst: 1 00:16:14.939 00:16:14.939 Power Management 00:16:14.939 ================ 00:16:14.939 Number of Power States: 1 00:16:14.939 Current Power State: Power State #0 00:16:14.939 Power State #0: 00:16:14.939 Max Power: 0.00 W 00:16:14.939 Non-Operational State: Operational 00:16:14.939 Entry Latency: Not Reported 00:16:14.939 Exit Latency: Not Reported 00:16:14.939 Relative Read Throughput: 0 00:16:14.939 Relative Read Latency: 0 00:16:14.939 Relative Write Throughput: 0 00:16:14.939 Relative Write Latency: 0 00:16:14.939 Idle Power: Not 
Reported 00:16:14.939 Active Power: Not Reported 00:16:14.939 Non-Operational Permissive Mode: Not Supported 00:16:14.939 00:16:14.939 Health Information 00:16:14.939 ================== 00:16:14.939 Critical Warnings: 00:16:14.939 Available Spare Space: OK 00:16:14.939 Temperature: OK 00:16:14.939 Device Reliability: OK 00:16:14.939 Read Only: No 00:16:14.939 Volatile Memory Backup: OK 00:16:14.939 Current Temperature: 0 Kelvin[2024-06-07 23:12:37.468590] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:14.939 [2024-06-07 23:12:37.468598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:14.939 [2024-06-07 23:12:37.468627] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:16:14.939 [2024-06-07 23:12:37.468636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.939 [2024-06-07 23:12:37.468643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.939 [2024-06-07 23:12:37.468650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.939 [2024-06-07 23:12:37.468657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.939 [2024-06-07 23:12:37.468744] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:14.939 [2024-06-07 23:12:37.468754] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:16:14.939 [2024-06-07 23:12:37.469776] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:16:14.939 [2024-06-07 23:12:37.469782] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:16:14.939 [2024-06-07 23:12:37.470755] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:16:14.939 [2024-06-07 23:12:37.470765] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:16:14.939 [2024-06-07 23:12:37.470826] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:16:14.939 [2024-06-07 23:12:37.475251] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:14.939 (-273 Celsius) 00:16:14.939 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:14.939 Available Spare: 0% 00:16:14.939 Available Spare Threshold: 0% 00:16:14.939 Life Percentage Used: 0% 00:16:14.939 Data Units Read: 0 00:16:14.939 Data Units Written: 0 00:16:14.939 Host Read Commands: 0 00:16:14.939 Host Write Commands: 0 00:16:14.939 Controller Busy Time: 0 minutes 00:16:14.939 Power Cycles: 0 00:16:14.939 Power On Hours: 0 hours 00:16:14.939 Unsafe Shutdowns: 0 00:16:14.939 Unrecoverable Media Errors: 0 00:16:14.939 Lifetime Error Log Entries: 0 00:16:14.939 Warning Temperature 
Time: 0 minutes 00:16:14.939 Critical Temperature Time: 0 minutes 00:16:14.939 00:16:14.939 Number of Queues 00:16:14.939 ================ 00:16:14.939 Number of I/O Submission Queues: 127 00:16:14.939 Number of I/O Completion Queues: 127 00:16:14.939 00:16:14.939 Active Namespaces 00:16:14.939 ================= 00:16:14.939 Namespace ID:1 00:16:14.939 Error Recovery Timeout: Unlimited 00:16:14.939 Command Set Identifier: NVM (00h) 00:16:14.939 Deallocate: Supported 00:16:14.939 Deallocated/Unwritten Error: Not Supported 00:16:14.939 Deallocated Read Value: Unknown 00:16:14.939 Deallocate in Write Zeroes: Not Supported 00:16:14.939 Deallocated Guard Field: 0xFFFF 00:16:14.939 Flush: Supported 00:16:14.939 Reservation: Supported 00:16:14.939 Namespace Sharing Capabilities: Multiple Controllers 00:16:14.939 Size (in LBAs): 131072 (0GiB) 00:16:14.939 Capacity (in LBAs): 131072 (0GiB) 00:16:14.939 Utilization (in LBAs): 131072 (0GiB) 00:16:14.939 NGUID: F6F106BD1AE64ADEA3772E6C34F654AC 00:16:14.939 UUID: f6f106bd-1ae6-4ade-a377-2e6c34f654ac 00:16:14.939 Thin Provisioning: Not Supported 00:16:14.939 Per-NS Atomic Units: Yes 00:16:14.939 Atomic Boundary Size (Normal): 0 00:16:14.939 Atomic Boundary Size (PFail): 0 00:16:14.939 Atomic Boundary Offset: 0 00:16:14.939 Maximum Single Source Range Length: 65535 00:16:14.939 Maximum Copy Length: 65535 00:16:14.939 Maximum Source Range Count: 1 00:16:14.939 NGUID/EUI64 Never Reused: No 00:16:14.939 Namespace Write Protected: No 00:16:14.939 Number of LBA Formats: 1 00:16:14.939 Current LBA Format: LBA Format #00 00:16:14.939 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:14.939 00:16:14.939 23:12:37 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:14.939 EAL: No free 2048 kB hugepages reported on node 1 00:16:20.227 Initializing NVMe Controllers 00:16:20.227 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:20.227 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:20.227 Initialization complete. Launching workers. 00:16:20.227 ======================================================== 00:16:20.227 Latency(us) 00:16:20.227 Device Information : IOPS MiB/s Average min max 00:16:20.227 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39951.80 156.06 3204.22 835.05 6847.45 00:16:20.227 ======================================================== 00:16:20.227 Total : 39951.80 156.06 3204.22 835.05 6847.45 00:16:20.227 00:16:20.227 23:12:42 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:20.227 EAL: No free 2048 kB hugepages reported on node 1 00:16:25.529 Initializing NVMe Controllers 00:16:25.529 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:25.529 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:25.529 Initialization complete. Launching workers. 
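A quick sanity check on the read numbers above, assuming the run held a steady queue depth of 128 (the -q value passed to spdk_nvme_perf) for the full 5 second window: by Little's law the mean latency should be close to queue depth divided by IOPS.

  # Little's law: mean latency (us) ~= qd * 1e6 / IOPS
  echo 'scale=2; 128 * 1000000 / 39951.80' | bc   # -> 3203.86 us, vs 3204.22 us reported for the read run
  echo 'scale=2; 128 * 1000000 / 16051.20' | bc   # -> 7974.48 us, vs 7980.75 us in the write run below

The small gap is expected, since the reported average likely also absorbs ramp-up and drain time at the edges of the measurement window.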
00:16:25.529 ======================================================== 00:16:25.529 Latency(us) 00:16:25.529 Device Information : IOPS MiB/s Average min max 00:16:25.529 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7980.75 7627.29 8022.85 00:16:25.529 ======================================================== 00:16:25.529 Total : 16051.20 62.70 7980.75 7627.29 8022.85 00:16:25.529 00:16:25.529 23:12:47 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:25.529 EAL: No free 2048 kB hugepages reported on node 1 00:16:30.818 Initializing NVMe Controllers 00:16:30.818 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:30.818 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:30.818 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:30.818 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:30.818 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:30.818 Initialization complete. Launching workers. 00:16:30.818 Starting thread on core 2 00:16:30.818 Starting thread on core 3 00:16:30.818 Starting thread on core 1 00:16:30.818 23:12:53 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:30.818 EAL: No free 2048 kB hugepages reported on node 1 00:16:34.121 Initializing NVMe Controllers 00:16:34.121 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:34.121 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:34.121 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:34.121 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:34.121 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:34.121 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:34.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:34.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:34.121 Initialization complete. Launching workers. 
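In the per-core arbitration results that follow, the second column reads as the projected time to finish the -n 100000 I/Os from the echoed configuration at the measured rate, so the two numbers are two views of the same throughput:

  echo 'scale=2; 100000 / 2357.33' | bc   # -> 42.42, matching the 'secs/100000 ios' shown for core 0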
00:16:34.121 Starting thread on core 1 with urgent priority queue 00:16:34.121 Starting thread on core 2 with urgent priority queue 00:16:34.121 Starting thread on core 3 with urgent priority queue 00:16:34.121 Starting thread on core 0 with urgent priority queue 00:16:34.121 SPDK bdev Controller (SPDK1 ) core 0: 2357.33 IO/s 42.42 secs/100000 ios 00:16:34.121 SPDK bdev Controller (SPDK1 ) core 1: 4195.67 IO/s 23.83 secs/100000 ios 00:16:34.121 SPDK bdev Controller (SPDK1 ) core 2: 2340.67 IO/s 42.72 secs/100000 ios 00:16:34.121 SPDK bdev Controller (SPDK1 ) core 3: 5206.33 IO/s 19.21 secs/100000 ios 00:16:34.121 ======================================================== 00:16:34.121 00:16:34.121 23:12:56 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:34.121 EAL: No free 2048 kB hugepages reported on node 1 00:16:34.121 Initializing NVMe Controllers 00:16:34.121 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:34.121 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:34.121 Namespace ID: 1 size: 0GB 00:16:34.121 Initialization complete. 00:16:34.121 INFO: using host memory buffer for IO 00:16:34.121 Hello world! 00:16:34.121 23:12:56 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:34.382 EAL: No free 2048 kB hugepages reported on node 1 00:16:35.768 Initializing NVMe Controllers 00:16:35.768 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:35.768 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:35.768 Initialization complete. Launching workers. 
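Each example binary in this block (identify, perf, reconnect, arbitration, hello_world, overhead) is pointed at the target through the same -r transport ID string; for VFIOUSER the traddr is the listener directory created earlier with nvmf_subsystem_add_listener rather than an IP address. A minimal sketch of the same pattern aimed at the second device set up above (hypothetical invocation with the workspace prefix trimmed; the script runs its own copy against cnode2 later):

  ./build/bin/spdk_nvme_identify -g \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'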
00:16:35.768 submit (in ns) avg, min, max = 7525.4, 3830.8, 4003525.0 00:16:35.768 complete (in ns) avg, min, max = 18441.7, 2347.5, 4000211.7 00:16:35.768 00:16:35.768 Submit histogram 00:16:35.768 ================ 00:16:35.768 Range in us Cumulative Count 00:16:35.768 3.813 - 3.840: 0.1143% ( 22) 00:16:35.768 3.840 - 3.867: 2.5292% ( 465) 00:16:35.768 3.867 - 3.893: 8.1330% ( 1079) 00:16:35.768 3.893 - 3.920: 17.8187% ( 1865) 00:16:35.768 3.920 - 3.947: 28.9639% ( 2146) 00:16:35.768 3.947 - 3.973: 41.1529% ( 2347) 00:16:35.768 3.973 - 4.000: 56.9722% ( 3046) 00:16:35.768 4.000 - 4.027: 73.1083% ( 3107) 00:16:35.768 4.027 - 4.053: 85.9050% ( 2464) 00:16:35.768 4.053 - 4.080: 93.3316% ( 1430) 00:16:35.768 4.080 - 4.107: 97.2267% ( 750) 00:16:35.768 4.107 - 4.133: 98.7016% ( 284) 00:16:35.768 4.133 - 4.160: 99.2469% ( 105) 00:16:35.768 4.160 - 4.187: 99.4339% ( 36) 00:16:35.768 4.187 - 4.213: 99.4858% ( 10) 00:16:35.768 4.213 - 4.240: 99.5066% ( 4) 00:16:35.768 4.240 - 4.267: 99.5170% ( 2) 00:16:35.768 4.267 - 4.293: 99.5222% ( 1) 00:16:35.768 4.373 - 4.400: 99.5274% ( 1) 00:16:35.768 4.827 - 4.853: 99.5326% ( 1) 00:16:35.768 4.987 - 5.013: 99.5378% ( 1) 00:16:35.768 5.120 - 5.147: 99.5430% ( 1) 00:16:35.768 5.280 - 5.307: 99.5482% ( 1) 00:16:35.768 5.387 - 5.413: 99.5534% ( 1) 00:16:35.768 5.440 - 5.467: 99.5586% ( 1) 00:16:35.768 5.787 - 5.813: 99.5637% ( 1) 00:16:35.768 5.893 - 5.920: 99.5689% ( 1) 00:16:35.768 5.920 - 5.947: 99.5741% ( 1) 00:16:35.768 5.947 - 5.973: 99.5793% ( 1) 00:16:35.768 5.973 - 6.000: 99.5845% ( 1) 00:16:35.768 6.000 - 6.027: 99.5949% ( 2) 00:16:35.768 6.027 - 6.053: 99.6001% ( 1) 00:16:35.768 6.053 - 6.080: 99.6105% ( 2) 00:16:35.768 6.080 - 6.107: 99.6209% ( 2) 00:16:35.768 6.107 - 6.133: 99.6313% ( 2) 00:16:35.768 6.133 - 6.160: 99.6365% ( 1) 00:16:35.768 6.187 - 6.213: 99.6417% ( 1) 00:16:35.768 6.240 - 6.267: 99.6468% ( 1) 00:16:35.768 6.267 - 6.293: 99.6520% ( 1) 00:16:35.768 6.320 - 6.347: 99.6676% ( 3) 00:16:35.768 6.373 - 6.400: 99.6728% ( 1) 00:16:35.768 6.507 - 6.533: 99.6780% ( 1) 00:16:35.768 7.093 - 7.147: 99.6832% ( 1) 00:16:35.768 7.147 - 7.200: 99.6936% ( 2) 00:16:35.768 7.253 - 7.307: 99.6988% ( 1) 00:16:35.768 7.307 - 7.360: 99.7040% ( 1) 00:16:35.768 7.360 - 7.413: 99.7144% ( 2) 00:16:35.768 7.573 - 7.627: 99.7196% ( 1) 00:16:35.768 7.627 - 7.680: 99.7299% ( 2) 00:16:35.768 7.680 - 7.733: 99.7455% ( 3) 00:16:35.768 7.733 - 7.787: 99.7611% ( 3) 00:16:35.768 7.840 - 7.893: 99.7819% ( 4) 00:16:35.768 7.893 - 7.947: 99.7923% ( 2) 00:16:35.768 8.000 - 8.053: 99.8078% ( 3) 00:16:35.768 8.053 - 8.107: 99.8130% ( 1) 00:16:35.768 8.107 - 8.160: 99.8182% ( 1) 00:16:35.768 8.427 - 8.480: 99.8286% ( 2) 00:16:35.768 8.480 - 8.533: 99.8390% ( 2) 00:16:35.768 8.533 - 8.587: 99.8442% ( 1) 00:16:35.768 8.587 - 8.640: 99.8494% ( 1) 00:16:35.768 8.693 - 8.747: 99.8546% ( 1) 00:16:35.768 8.747 - 8.800: 99.8650% ( 2) 00:16:35.768 8.853 - 8.907: 99.8702% ( 1) 00:16:35.768 8.907 - 8.960: 99.8754% ( 1) 00:16:35.768 9.440 - 9.493: 99.8857% ( 2) 00:16:35.768 9.600 - 9.653: 99.8909% ( 1) 00:16:35.768 10.187 - 10.240: 99.8961% ( 1) 00:16:35.768 11.147 - 11.200: 99.9013% ( 1) 00:16:35.768 12.373 - 12.427: 99.9065% ( 1) 00:16:35.768 14.720 - 14.827: 99.9117% ( 1) 00:16:35.768 3986.773 - 4014.080: 100.0000% ( 17) 00:16:35.768 00:16:35.768 Complete histogram 00:16:35.768 ================== 00:16:35.768 Range in us Cumulative Count 00:16:35.768 2.347 - 2.360: 0.0104% ( 2) 00:16:35.768 2.360 - 2.373: 0.6388% ( 121) 00:16:35.768 2.373 - 2.387: 2.4409% ( 347) 00:16:35.768 2.387 - 
2.400: 2.7318% ( 56) 00:16:35.768 2.400 - 2.413: 3.1057% ( 72) 00:16:35.768 2.413 - 2.427: 9.8156% ( 1292) 00:16:35.768 2.427 - 2.440: 61.4853% ( 9949) 00:16:35.768 2.440 - 2.453: 67.5357% ( 1165) 00:16:35.768 2.453 - 2.467: 77.2215% ( 1865) 00:16:35.768 2.467 - 2.480: 81.6359% ( 850) 00:16:35.768 2.480 - 2.493: 82.8876% ( 241) 00:16:35.768 2.493 - 2.507: 88.1797% ( 1019) 00:16:35.768 2.507 - 2.520: 94.4534% ( 1208) 00:16:35.768 2.520 - 2.533: 96.8476% ( 461) 00:16:35.768 2.533 - 2.547: 98.1875% ( 258) 00:16:35.768 2.547 - 2.560: 98.9665% ( 150) 00:16:35.768 2.560 - 2.573: 99.2262% ( 50) 00:16:35.768 2.573 - 2.587: 99.2469% ( 4) 00:16:35.768 2.587 - 2.600: 99.2573% ( 2) 00:16:35.768 4.453 - 4.480: 99.2625% ( 1) 00:16:35.768 4.480 - 4.507: 99.2677% ( 1) 00:16:35.768 4.533 - 4.560: 99.2729% ( 1) 00:16:35.768 4.560 - 4.587: 99.2781% ( 1) 00:16:35.768 4.613 - 4.640: 99.2885% ( 2) 00:16:35.768 4.667 - 4.693: 99.2937% ( 1) 00:16:35.768 4.747 - 4.773: 99.3041% ( 2) 00:16:35.768 4.773 - 4.800: 99.3093% ( 1) 00:16:35.768 4.800 - 4.827: 99.3197% ( 2) 00:16:35.768 4.880 - 4.907: 99.3249% ( 1) 00:16:35.768 5.333 - 5.360: 99.3300% ( 1) 00:16:35.768 5.440 - 5.467: 99.3352% ( 1) 00:16:35.768 5.573 - 5.600: 99.3404% ( 1) 00:16:35.768 5.653 - 5.680: 99.3456% ( 1) 00:16:35.768 5.760 - 5.787: 99.3508% ( 1) 00:16:35.768 5.787 - 5.813: 99.3560% ( 1) 00:16:35.768 5.813 - 5.840: 99.3612% ( 1) 00:16:35.768 5.840 - 5.867: 99.3664% ( 1) 00:16:35.768 6.000 - 6.027: 99.3716% ( 1) 00:16:35.768 6.027 - 6.053: 99.3768% ( 1) 00:16:35.768 6.053 - 6.080: 99.3820% ( 1) 00:16:35.768 6.107 - 6.133: 99.3872% ( 1) 00:16:35.768 6.133 - 6.160: 99.3924% ( 1) 00:16:35.768 6.187 - 6.213: 99.3976% ( 1) 00:16:35.768 6.267 - 6.293: 99.4028% ( 1) 00:16:35.768 6.293 - 6.320: 99.4079% ( 1) 00:16:35.768 6.347 - 6.373: 99.4131% ( 1) 00:16:35.768 6.480 - 6.507: 99.4235% ( 2) 00:16:35.768 6.587 - 6.613: 99.4339% ( 2) 00:16:35.768 6.613 - 6.640: 99.4495% ( 3) 00:16:35.768 6.640 - 6.667: 99.4547% ( 1) 00:16:35.768 6.667 - 6.693: 99.4599% ( 1) 00:16:35.768 6.720 - 6.747: 99.4651% ( 1) 00:16:35.768 6.747 - 6.773: 99.4703% ( 1) 00:16:35.768 6.827 - 6.880: 99.4755% ( 1) 00:16:35.768 6.880 - 6.933: 99.4807% ( 1) 00:16:35.768 6.987 - 7.040: 99.4910% ( 2) 00:16:35.768 7.093 - 7.147: 99.4962% ( 1) 00:16:35.768 7.200 - 7.253: 99.5014% ( 1) 00:16:35.768 7.307 - 7.360: 99.5118% ( 2) 00:16:35.768 7.467 - 7.520: 99.5274% ( 3) 00:16:35.768 7.627 - 7.680: 99.5378% ( 2) 00:16:35.768 7.840 - 7.893: 99.5430% ( 1) 00:16:35.768 7.893 - 7.947: 99.5482% ( 1) 00:16:35.768 8.267 - 8.320: 99.5534% ( 1) 00:16:35.768 8.640 - 8.693: 99.5637% ( 2) 00:16:35.768 9.067 - 9.120: 99.5689% ( 1) 00:16:35.768 10.507 - 10.560: 99.5741% ( 1) 00:16:35.768 10.773 - 10.827: 99.5793% ( 1) 00:16:35.768 11.253 - 11.307: 99.5845% ( 1) 00:16:35.768 11.733 - 11.787: 99.5897% ( 1) 00:16:35.768 14.080 - 14.187: 99.5949% ( 1) 00:16:35.768 15.040 - 15.147: 99.6001% ( 1) 00:16:35.768 3986.773 - 4014.080: 100.0000% ( 77) 00:16:35.768 00:16:35.768 23:12:58 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:35.768 23:12:58 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:35.768 23:12:58 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:35.768 23:12:58 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:35.768 23:12:58 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 
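In outline, the AER check driven here starts the aer example in the background, waits for it to signal readiness via the -t touch file, hot-adds a second namespace over RPC, and then expects a namespace-attribute-changed event from the controller. Roughly (commands as issued below, with the long workspace prefix trimmed and the touch-file handshake elided):

  ./test/nvme/aer/aer -n 2 -g -t /tmp/aer_touch_file \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' &
  ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
  # expected: the aer example logs an 'aer_cb - Changed Namespace' notice and the follow-up
  # nvmf_get_subsystems listing shows Malloc3 as nsid 2 on cnode1

The nvmf_get_subsystems listing that follows immediately below is the "before" snapshot, taken while cnode1 still has only Malloc1 attached.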
00:16:35.768 [2024-06-07 23:12:58.209446] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:16:35.768 [ 00:16:35.768 { 00:16:35.768 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:35.768 "subtype": "Discovery", 00:16:35.768 "listen_addresses": [], 00:16:35.768 "allow_any_host": true, 00:16:35.768 "hosts": [] 00:16:35.768 }, 00:16:35.768 { 00:16:35.768 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:35.768 "subtype": "NVMe", 00:16:35.768 "listen_addresses": [ 00:16:35.768 { 00:16:35.768 "transport": "VFIOUSER", 00:16:35.768 "trtype": "VFIOUSER", 00:16:35.768 "adrfam": "IPv4", 00:16:35.768 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:35.768 "trsvcid": "0" 00:16:35.768 } 00:16:35.768 ], 00:16:35.768 "allow_any_host": true, 00:16:35.768 "hosts": [], 00:16:35.768 "serial_number": "SPDK1", 00:16:35.768 "model_number": "SPDK bdev Controller", 00:16:35.769 "max_namespaces": 32, 00:16:35.769 "min_cntlid": 1, 00:16:35.769 "max_cntlid": 65519, 00:16:35.769 "namespaces": [ 00:16:35.769 { 00:16:35.769 "nsid": 1, 00:16:35.769 "bdev_name": "Malloc1", 00:16:35.769 "name": "Malloc1", 00:16:35.769 "nguid": "F6F106BD1AE64ADEA3772E6C34F654AC", 00:16:35.769 "uuid": "f6f106bd-1ae6-4ade-a377-2e6c34f654ac" 00:16:35.769 } 00:16:35.769 ] 00:16:35.769 }, 00:16:35.769 { 00:16:35.769 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:35.769 "subtype": "NVMe", 00:16:35.769 "listen_addresses": [ 00:16:35.769 { 00:16:35.769 "transport": "VFIOUSER", 00:16:35.769 "trtype": "VFIOUSER", 00:16:35.769 "adrfam": "IPv4", 00:16:35.769 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:35.769 "trsvcid": "0" 00:16:35.769 } 00:16:35.769 ], 00:16:35.769 "allow_any_host": true, 00:16:35.769 "hosts": [], 00:16:35.769 "serial_number": "SPDK2", 00:16:35.769 "model_number": "SPDK bdev Controller", 00:16:35.769 "max_namespaces": 32, 00:16:35.769 "min_cntlid": 1, 00:16:35.769 "max_cntlid": 65519, 00:16:35.769 "namespaces": [ 00:16:35.769 { 00:16:35.769 "nsid": 1, 00:16:35.769 "bdev_name": "Malloc2", 00:16:35.769 "name": "Malloc2", 00:16:35.769 "nguid": "D2EDFAF245CC4CE9ABAC997CB0BF7308", 00:16:35.769 "uuid": "d2edfaf2-45cc-4ce9-abac-997cb0bf7308" 00:16:35.769 } 00:16:35.769 ] 00:16:35.769 } 00:16:35.769 ] 00:16:35.769 23:12:58 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:35.769 23:12:58 -- target/nvmf_vfio_user.sh@34 -- # aerpid=2782290 00:16:35.769 23:12:58 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:35.769 23:12:58 -- common/autotest_common.sh@1244 -- # local i=0 00:16:35.769 23:12:58 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:35.769 23:12:58 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:35.769 23:12:58 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:35.769 23:12:58 -- common/autotest_common.sh@1255 -- # return 0 00:16:35.769 23:12:58 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:35.769 23:12:58 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:35.769 EAL: No free 2048 kB hugepages reported on node 1 00:16:35.769 Malloc3 00:16:35.769 23:12:58 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:36.030 23:12:58 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:36.030 Asynchronous Event Request test 00:16:36.030 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:36.030 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:36.030 Registering asynchronous event callbacks... 00:16:36.030 Starting namespace attribute notice tests for all controllers... 00:16:36.030 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:36.030 aer_cb - Changed Namespace 00:16:36.030 Cleaning up... 00:16:36.293 [ 00:16:36.293 { 00:16:36.293 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:36.293 "subtype": "Discovery", 00:16:36.293 "listen_addresses": [], 00:16:36.293 "allow_any_host": true, 00:16:36.293 "hosts": [] 00:16:36.293 }, 00:16:36.293 { 00:16:36.293 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:36.293 "subtype": "NVMe", 00:16:36.293 "listen_addresses": [ 00:16:36.293 { 00:16:36.293 "transport": "VFIOUSER", 00:16:36.293 "trtype": "VFIOUSER", 00:16:36.293 "adrfam": "IPv4", 00:16:36.293 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:36.293 "trsvcid": "0" 00:16:36.293 } 00:16:36.293 ], 00:16:36.293 "allow_any_host": true, 00:16:36.293 "hosts": [], 00:16:36.293 "serial_number": "SPDK1", 00:16:36.293 "model_number": "SPDK bdev Controller", 00:16:36.293 "max_namespaces": 32, 00:16:36.293 "min_cntlid": 1, 00:16:36.293 "max_cntlid": 65519, 00:16:36.293 "namespaces": [ 00:16:36.293 { 00:16:36.293 "nsid": 1, 00:16:36.293 "bdev_name": "Malloc1", 00:16:36.293 "name": "Malloc1", 00:16:36.293 "nguid": "F6F106BD1AE64ADEA3772E6C34F654AC", 00:16:36.293 "uuid": "f6f106bd-1ae6-4ade-a377-2e6c34f654ac" 00:16:36.293 }, 00:16:36.293 { 00:16:36.293 "nsid": 2, 00:16:36.293 "bdev_name": "Malloc3", 00:16:36.293 "name": "Malloc3", 00:16:36.293 "nguid": "E4ED2B49D78045D8812846D931DC8AA5", 00:16:36.293 "uuid": "e4ed2b49-d780-45d8-8128-46d931dc8aa5" 00:16:36.293 } 00:16:36.293 ] 00:16:36.293 }, 00:16:36.293 { 00:16:36.293 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:36.293 "subtype": "NVMe", 00:16:36.293 "listen_addresses": [ 00:16:36.293 { 00:16:36.293 "transport": "VFIOUSER", 00:16:36.293 "trtype": "VFIOUSER", 00:16:36.293 "adrfam": "IPv4", 00:16:36.293 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:36.293 "trsvcid": "0" 00:16:36.293 } 00:16:36.293 ], 00:16:36.293 "allow_any_host": true, 00:16:36.293 "hosts": [], 00:16:36.293 "serial_number": "SPDK2", 00:16:36.293 "model_number": "SPDK bdev Controller", 00:16:36.293 "max_namespaces": 32, 00:16:36.293 "min_cntlid": 1, 00:16:36.293 "max_cntlid": 65519, 00:16:36.293 "namespaces": [ 00:16:36.293 { 00:16:36.293 "nsid": 1, 00:16:36.293 "bdev_name": "Malloc2", 00:16:36.293 "name": "Malloc2", 00:16:36.293 "nguid": "D2EDFAF245CC4CE9ABAC997CB0BF7308", 00:16:36.293 "uuid": "d2edfaf2-45cc-4ce9-abac-997cb0bf7308" 
00:16:36.293 } 00:16:36.293 ] 00:16:36.293 } 00:16:36.293 ] 00:16:36.293 23:12:58 -- target/nvmf_vfio_user.sh@44 -- # wait 2782290 00:16:36.293 23:12:58 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:36.293 23:12:58 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:36.293 23:12:58 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:36.293 23:12:58 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:36.293 [2024-06-07 23:12:58.770576] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:16:36.293 [2024-06-07 23:12:58.770618] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2782325 ] 00:16:36.293 EAL: No free 2048 kB hugepages reported on node 1 00:16:36.293 [2024-06-07 23:12:58.804778] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:36.293 [2024-06-07 23:12:58.811979] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:36.293 [2024-06-07 23:12:58.811999] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f0b1475a000 00:16:36.293 [2024-06-07 23:12:58.812978] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:36.293 [2024-06-07 23:12:58.813980] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:36.293 [2024-06-07 23:12:58.814987] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:36.293 [2024-06-07 23:12:58.815994] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:36.293 [2024-06-07 23:12:58.816997] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:36.293 [2024-06-07 23:12:58.818002] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:36.293 [2024-06-07 23:12:58.819008] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:36.293 [2024-06-07 23:12:58.820012] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:36.293 [2024-06-07 23:12:58.821017] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:36.293 [2024-06-07 23:12:58.821029] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f0b13520000 00:16:36.293 [2024-06-07 23:12:58.822361] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:36.293 [2024-06-07 
23:12:58.841419] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:36.293 [2024-06-07 23:12:58.841443] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:16:36.293 [2024-06-07 23:12:58.846522] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:36.293 [2024-06-07 23:12:58.846569] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:36.293 [2024-06-07 23:12:58.846648] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:16:36.294 [2024-06-07 23:12:58.846661] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:16:36.294 [2024-06-07 23:12:58.846666] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:16:36.294 [2024-06-07 23:12:58.847524] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:36.294 [2024-06-07 23:12:58.847535] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:16:36.294 [2024-06-07 23:12:58.847542] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:16:36.294 [2024-06-07 23:12:58.848527] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:36.294 [2024-06-07 23:12:58.848540] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:16:36.294 [2024-06-07 23:12:58.848547] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:16:36.294 [2024-06-07 23:12:58.849536] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:36.294 [2024-06-07 23:12:58.849546] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:36.294 [2024-06-07 23:12:58.850544] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:36.294 [2024-06-07 23:12:58.850552] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:16:36.294 [2024-06-07 23:12:58.850557] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:16:36.294 [2024-06-07 23:12:58.850565] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:36.294 [2024-06-07 23:12:58.850671] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:16:36.294 
[2024-06-07 23:12:58.850675] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:36.294 [2024-06-07 23:12:58.850680] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:36.294 [2024-06-07 23:12:58.851551] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:36.294 [2024-06-07 23:12:58.852559] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:36.294 [2024-06-07 23:12:58.853564] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:36.294 [2024-06-07 23:12:58.854591] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:36.294 [2024-06-07 23:12:58.855586] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:36.294 [2024-06-07 23:12:58.855594] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:36.294 [2024-06-07 23:12:58.855599] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:16:36.294 [2024-06-07 23:12:58.855620] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:16:36.294 [2024-06-07 23:12:58.855627] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:16:36.294 [2024-06-07 23:12:58.855640] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:36.294 [2024-06-07 23:12:58.855645] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:36.294 [2024-06-07 23:12:58.855657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:36.294 [2024-06-07 23:12:58.863251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:36.294 [2024-06-07 23:12:58.863263] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:16:36.294 [2024-06-07 23:12:58.863271] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:16:36.294 [2024-06-07 23:12:58.863275] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:16:36.294 [2024-06-07 23:12:58.863280] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:36.294 [2024-06-07 23:12:58.863284] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:16:36.294 [2024-06-07 23:12:58.863289] 
nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:16:36.294 [2024-06-07 23:12:58.863293] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:16:36.294 [2024-06-07 23:12:58.863303] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:16:36.294 [2024-06-07 23:12:58.863314] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:36.294 [2024-06-07 23:12:58.871250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:36.294 [2024-06-07 23:12:58.871263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:36.294 [2024-06-07 23:12:58.871272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:36.294 [2024-06-07 23:12:58.871280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:36.294 [2024-06-07 23:12:58.871288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:36.294 [2024-06-07 23:12:58.871293] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:16:36.294 [2024-06-07 23:12:58.871301] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:36.294 [2024-06-07 23:12:58.871310] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:36.294 [2024-06-07 23:12:58.879252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:36.294 [2024-06-07 23:12:58.879260] nvme_ctrlr.c:2877:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:16:36.294 [2024-06-07 23:12:58.879265] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:36.294 [2024-06-07 23:12:58.879272] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:16:36.294 [2024-06-07 23:12:58.879279] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:16:36.294 [2024-06-07 23:12:58.879297] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:36.294 [2024-06-07 23:12:58.887251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:36.294 [2024-06-07 23:12:58.887301] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
identify active ns (timeout 30000 ms) 00:16:36.294 [2024-06-07 23:12:58.887308] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:16:36.294 [2024-06-07 23:12:58.887315] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:36.294 [2024-06-07 23:12:58.887320] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:36.294 [2024-06-07 23:12:58.887326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:36.294 [2024-06-07 23:12:58.895249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:36.294 [2024-06-07 23:12:58.895259] nvme_ctrlr.c:4542:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:16:36.294 [2024-06-07 23:12:58.895271] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:16:36.294 [2024-06-07 23:12:58.895279] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:16:36.294 [2024-06-07 23:12:58.895287] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:36.294 [2024-06-07 23:12:58.895291] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:36.294 [2024-06-07 23:12:58.895298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:36.294 [2024-06-07 23:12:58.903251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:36.294 [2024-06-07 23:12:58.903264] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:36.294 [2024-06-07 23:12:58.903272] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:36.294 [2024-06-07 23:12:58.903278] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:36.294 [2024-06-07 23:12:58.903283] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:36.294 [2024-06-07 23:12:58.903289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:36.294 [2024-06-07 23:12:58.911251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:36.294 [2024-06-07 23:12:58.911261] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:36.294 [2024-06-07 23:12:58.911267] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:16:36.294 [2024-06-07 23:12:58.911275] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:16:36.294 [2024-06-07 23:12:58.911280] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:36.294 [2024-06-07 23:12:58.911285] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:16:36.294 [2024-06-07 23:12:58.911290] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:16:36.295 [2024-06-07 23:12:58.911294] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:16:36.295 [2024-06-07 23:12:58.911299] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:16:36.295 [2024-06-07 23:12:58.911315] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:36.295 [2024-06-07 23:12:58.919250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:36.295 [2024-06-07 23:12:58.919264] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:36.295 [2024-06-07 23:12:58.927250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:36.295 [2024-06-07 23:12:58.927263] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:36.295 [2024-06-07 23:12:58.935254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:36.295 [2024-06-07 23:12:58.935267] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:36.295 [2024-06-07 23:12:58.943250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:36.295 [2024-06-07 23:12:58.943265] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:36.295 [2024-06-07 23:12:58.943269] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:36.295 [2024-06-07 23:12:58.943273] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:36.295 [2024-06-07 23:12:58.943276] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:36.295 [2024-06-07 23:12:58.943283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:36.295 [2024-06-07 23:12:58.943290] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:36.295 [2024-06-07 23:12:58.943294] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:36.295 [2024-06-07 23:12:58.943300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 
0x2000002fc000 PRP2 0x0 00:16:36.295 [2024-06-07 23:12:58.943307] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:36.295 [2024-06-07 23:12:58.943311] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:36.295 [2024-06-07 23:12:58.943317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:36.295 [2024-06-07 23:12:58.943324] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:36.295 [2024-06-07 23:12:58.943328] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:36.295 [2024-06-07 23:12:58.943334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:36.295 [2024-06-07 23:12:58.951250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:36.295 [2024-06-07 23:12:58.951265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:36.295 [2024-06-07 23:12:58.951273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:36.295 [2024-06-07 23:12:58.951280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:36.295 ===================================================== 00:16:36.295 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:36.295 ===================================================== 00:16:36.295 Controller Capabilities/Features 00:16:36.295 ================================ 00:16:36.295 Vendor ID: 4e58 00:16:36.295 Subsystem Vendor ID: 4e58 00:16:36.295 Serial Number: SPDK2 00:16:36.295 Model Number: SPDK bdev Controller 00:16:36.295 Firmware Version: 24.01.1 00:16:36.295 Recommended Arb Burst: 6 00:16:36.295 IEEE OUI Identifier: 8d 6b 50 00:16:36.295 Multi-path I/O 00:16:36.295 May have multiple subsystem ports: Yes 00:16:36.295 May have multiple controllers: Yes 00:16:36.295 Associated with SR-IOV VF: No 00:16:36.295 Max Data Transfer Size: 131072 00:16:36.295 Max Number of Namespaces: 32 00:16:36.295 Max Number of I/O Queues: 127 00:16:36.295 NVMe Specification Version (VS): 1.3 00:16:36.295 NVMe Specification Version (Identify): 1.3 00:16:36.295 Maximum Queue Entries: 256 00:16:36.295 Contiguous Queues Required: Yes 00:16:36.295 Arbitration Mechanisms Supported 00:16:36.295 Weighted Round Robin: Not Supported 00:16:36.295 Vendor Specific: Not Supported 00:16:36.295 Reset Timeout: 15000 ms 00:16:36.295 Doorbell Stride: 4 bytes 00:16:36.295 NVM Subsystem Reset: Not Supported 00:16:36.295 Command Sets Supported 00:16:36.295 NVM Command Set: Supported 00:16:36.295 Boot Partition: Not Supported 00:16:36.295 Memory Page Size Minimum: 4096 bytes 00:16:36.295 Memory Page Size Maximum: 4096 bytes 00:16:36.295 Persistent Memory Region: Not Supported 00:16:36.295 Optional Asynchronous Events Supported 00:16:36.295 Namespace Attribute Notices: Supported 00:16:36.295 Firmware Activation Notices: Not Supported 00:16:36.295 ANA Change Notices: Not Supported 00:16:36.295 PLE Aggregate Log Change Notices: Not Supported 00:16:36.295 LBA Status Info Alert 
Notices: Not Supported 00:16:36.295 EGE Aggregate Log Change Notices: Not Supported 00:16:36.295 Normal NVM Subsystem Shutdown event: Not Supported 00:16:36.295 Zone Descriptor Change Notices: Not Supported 00:16:36.295 Discovery Log Change Notices: Not Supported 00:16:36.295 Controller Attributes 00:16:36.295 128-bit Host Identifier: Supported 00:16:36.295 Non-Operational Permissive Mode: Not Supported 00:16:36.295 NVM Sets: Not Supported 00:16:36.295 Read Recovery Levels: Not Supported 00:16:36.295 Endurance Groups: Not Supported 00:16:36.295 Predictable Latency Mode: Not Supported 00:16:36.295 Traffic Based Keep ALive: Not Supported 00:16:36.295 Namespace Granularity: Not Supported 00:16:36.295 SQ Associations: Not Supported 00:16:36.295 UUID List: Not Supported 00:16:36.295 Multi-Domain Subsystem: Not Supported 00:16:36.295 Fixed Capacity Management: Not Supported 00:16:36.295 Variable Capacity Management: Not Supported 00:16:36.295 Delete Endurance Group: Not Supported 00:16:36.295 Delete NVM Set: Not Supported 00:16:36.295 Extended LBA Formats Supported: Not Supported 00:16:36.295 Flexible Data Placement Supported: Not Supported 00:16:36.295 00:16:36.295 Controller Memory Buffer Support 00:16:36.295 ================================ 00:16:36.295 Supported: No 00:16:36.295 00:16:36.295 Persistent Memory Region Support 00:16:36.295 ================================ 00:16:36.295 Supported: No 00:16:36.295 00:16:36.295 Admin Command Set Attributes 00:16:36.295 ============================ 00:16:36.295 Security Send/Receive: Not Supported 00:16:36.295 Format NVM: Not Supported 00:16:36.295 Firmware Activate/Download: Not Supported 00:16:36.295 Namespace Management: Not Supported 00:16:36.295 Device Self-Test: Not Supported 00:16:36.295 Directives: Not Supported 00:16:36.295 NVMe-MI: Not Supported 00:16:36.295 Virtualization Management: Not Supported 00:16:36.295 Doorbell Buffer Config: Not Supported 00:16:36.295 Get LBA Status Capability: Not Supported 00:16:36.295 Command & Feature Lockdown Capability: Not Supported 00:16:36.295 Abort Command Limit: 4 00:16:36.295 Async Event Request Limit: 4 00:16:36.295 Number of Firmware Slots: N/A 00:16:36.295 Firmware Slot 1 Read-Only: N/A 00:16:36.295 Firmware Activation Without Reset: N/A 00:16:36.295 Multiple Update Detection Support: N/A 00:16:36.295 Firmware Update Granularity: No Information Provided 00:16:36.295 Per-Namespace SMART Log: No 00:16:36.295 Asymmetric Namespace Access Log Page: Not Supported 00:16:36.295 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:36.295 Command Effects Log Page: Supported 00:16:36.295 Get Log Page Extended Data: Supported 00:16:36.295 Telemetry Log Pages: Not Supported 00:16:36.295 Persistent Event Log Pages: Not Supported 00:16:36.295 Supported Log Pages Log Page: May Support 00:16:36.295 Commands Supported & Effects Log Page: Not Supported 00:16:36.295 Feature Identifiers & Effects Log Page:May Support 00:16:36.295 NVMe-MI Commands & Effects Log Page: May Support 00:16:36.295 Data Area 4 for Telemetry Log: Not Supported 00:16:36.295 Error Log Page Entries Supported: 128 00:16:36.295 Keep Alive: Supported 00:16:36.295 Keep Alive Granularity: 10000 ms 00:16:36.295 00:16:36.295 NVM Command Set Attributes 00:16:36.295 ========================== 00:16:36.295 Submission Queue Entry Size 00:16:36.295 Max: 64 00:16:36.295 Min: 64 00:16:36.295 Completion Queue Entry Size 00:16:36.295 Max: 16 00:16:36.295 Min: 16 00:16:36.295 Number of Namespaces: 32 00:16:36.295 Compare Command: Supported 00:16:36.295 Write 
Uncorrectable Command: Not Supported 00:16:36.295 Dataset Management Command: Supported 00:16:36.295 Write Zeroes Command: Supported 00:16:36.295 Set Features Save Field: Not Supported 00:16:36.295 Reservations: Not Supported 00:16:36.295 Timestamp: Not Supported 00:16:36.295 Copy: Supported 00:16:36.295 Volatile Write Cache: Present 00:16:36.295 Atomic Write Unit (Normal): 1 00:16:36.295 Atomic Write Unit (PFail): 1 00:16:36.295 Atomic Compare & Write Unit: 1 00:16:36.295 Fused Compare & Write: Supported 00:16:36.295 Scatter-Gather List 00:16:36.295 SGL Command Set: Supported (Dword aligned) 00:16:36.295 SGL Keyed: Not Supported 00:16:36.296 SGL Bit Bucket Descriptor: Not Supported 00:16:36.296 SGL Metadata Pointer: Not Supported 00:16:36.296 Oversized SGL: Not Supported 00:16:36.296 SGL Metadata Address: Not Supported 00:16:36.296 SGL Offset: Not Supported 00:16:36.296 Transport SGL Data Block: Not Supported 00:16:36.296 Replay Protected Memory Block: Not Supported 00:16:36.296 00:16:36.296 Firmware Slot Information 00:16:36.296 ========================= 00:16:36.296 Active slot: 1 00:16:36.296 Slot 1 Firmware Revision: 24.01.1 00:16:36.296 00:16:36.296 00:16:36.296 Commands Supported and Effects 00:16:36.296 ============================== 00:16:36.296 Admin Commands 00:16:36.296 -------------- 00:16:36.296 Get Log Page (02h): Supported 00:16:36.296 Identify (06h): Supported 00:16:36.296 Abort (08h): Supported 00:16:36.296 Set Features (09h): Supported 00:16:36.296 Get Features (0Ah): Supported 00:16:36.296 Asynchronous Event Request (0Ch): Supported 00:16:36.296 Keep Alive (18h): Supported 00:16:36.296 I/O Commands 00:16:36.296 ------------ 00:16:36.296 Flush (00h): Supported LBA-Change 00:16:36.296 Write (01h): Supported LBA-Change 00:16:36.296 Read (02h): Supported 00:16:36.296 Compare (05h): Supported 00:16:36.296 Write Zeroes (08h): Supported LBA-Change 00:16:36.296 Dataset Management (09h): Supported LBA-Change 00:16:36.296 Copy (19h): Supported LBA-Change 00:16:36.296 Unknown (79h): Supported LBA-Change 00:16:36.296 Unknown (7Ah): Supported 00:16:36.296 00:16:36.296 Error Log 00:16:36.296 ========= 00:16:36.296 00:16:36.296 Arbitration 00:16:36.296 =========== 00:16:36.296 Arbitration Burst: 1 00:16:36.296 00:16:36.296 Power Management 00:16:36.296 ================ 00:16:36.296 Number of Power States: 1 00:16:36.296 Current Power State: Power State #0 00:16:36.296 Power State #0: 00:16:36.296 Max Power: 0.00 W 00:16:36.296 Non-Operational State: Operational 00:16:36.296 Entry Latency: Not Reported 00:16:36.296 Exit Latency: Not Reported 00:16:36.296 Relative Read Throughput: 0 00:16:36.296 Relative Read Latency: 0 00:16:36.296 Relative Write Throughput: 0 00:16:36.296 Relative Write Latency: 0 00:16:36.296 Idle Power: Not Reported 00:16:36.296 Active Power: Not Reported 00:16:36.296 Non-Operational Permissive Mode: Not Supported 00:16:36.296 00:16:36.296 Health Information 00:16:36.296 ================== 00:16:36.296 Critical Warnings: 00:16:36.296 Available Spare Space: OK 00:16:36.296 Temperature: OK 00:16:36.296 Device Reliability: OK 00:16:36.296 Read Only: No 00:16:36.296 Volatile Memory Backup: OK 00:16:36.296 Current Temperature: 0 Kelvin[2024-06-07 23:12:58.951381] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:36.296 [2024-06-07 23:12:58.959249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:36.296 [2024-06-07 
23:12:58.959279] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:16:36.296 [2024-06-07 23:12:58.959288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:36.296 [2024-06-07 23:12:58.959294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:36.296 [2024-06-07 23:12:58.959300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:36.296 [2024-06-07 23:12:58.959306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:36.296 [2024-06-07 23:12:58.959356] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:36.296 [2024-06-07 23:12:58.959366] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:36.296 [2024-06-07 23:12:58.960391] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:16:36.296 [2024-06-07 23:12:58.960402] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:16:36.296 [2024-06-07 23:12:58.961370] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:36.296 [2024-06-07 23:12:58.961381] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:16:36.296 [2024-06-07 23:12:58.961429] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:36.296 [2024-06-07 23:12:58.962801] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:36.557 (-273 Celsius) 00:16:36.557 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:36.557 Available Spare: 0% 00:16:36.557 Available Spare Threshold: 0% 00:16:36.557 Life Percentage Used: 0% 00:16:36.557 Data Units Read: 0 00:16:36.557 Data Units Written: 0 00:16:36.557 Host Read Commands: 0 00:16:36.557 Host Write Commands: 0 00:16:36.557 Controller Busy Time: 0 minutes 00:16:36.557 Power Cycles: 0 00:16:36.557 Power On Hours: 0 hours 00:16:36.557 Unsafe Shutdowns: 0 00:16:36.557 Unrecoverable Media Errors: 0 00:16:36.558 Lifetime Error Log Entries: 0 00:16:36.558 Warning Temperature Time: 0 minutes 00:16:36.558 Critical Temperature Time: 0 minutes 00:16:36.558 00:16:36.558 Number of Queues 00:16:36.558 ================ 00:16:36.558 Number of I/O Submission Queues: 127 00:16:36.558 Number of I/O Completion Queues: 127 00:16:36.558 00:16:36.558 Active Namespaces 00:16:36.558 ================= 00:16:36.558 Namespace ID:1 00:16:36.558 Error Recovery Timeout: Unlimited 00:16:36.558 Command Set Identifier: NVM (00h) 00:16:36.558 Deallocate: Supported 00:16:36.558 Deallocated/Unwritten Error: Not Supported 00:16:36.558 Deallocated Read Value: Unknown 00:16:36.558 Deallocate in Write Zeroes: Not Supported 00:16:36.558 Deallocated Guard Field: 0xFFFF 00:16:36.558 Flush: Supported 00:16:36.558 Reservation: Supported 00:16:36.558 Namespace Sharing 
Capabilities: Multiple Controllers 00:16:36.558 Size (in LBAs): 131072 (0GiB) 00:16:36.558 Capacity (in LBAs): 131072 (0GiB) 00:16:36.558 Utilization (in LBAs): 131072 (0GiB) 00:16:36.558 NGUID: D2EDFAF245CC4CE9ABAC997CB0BF7308 00:16:36.558 UUID: d2edfaf2-45cc-4ce9-abac-997cb0bf7308 00:16:36.558 Thin Provisioning: Not Supported 00:16:36.558 Per-NS Atomic Units: Yes 00:16:36.558 Atomic Boundary Size (Normal): 0 00:16:36.558 Atomic Boundary Size (PFail): 0 00:16:36.558 Atomic Boundary Offset: 0 00:16:36.558 Maximum Single Source Range Length: 65535 00:16:36.558 Maximum Copy Length: 65535 00:16:36.558 Maximum Source Range Count: 1 00:16:36.558 NGUID/EUI64 Never Reused: No 00:16:36.558 Namespace Write Protected: No 00:16:36.558 Number of LBA Formats: 1 00:16:36.558 Current LBA Format: LBA Format #00 00:16:36.558 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:36.558 00:16:36.558 23:12:59 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:36.558 EAL: No free 2048 kB hugepages reported on node 1 00:16:41.848 Initializing NVMe Controllers 00:16:41.848 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:41.848 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:41.848 Initialization complete. Launching workers. 00:16:41.848 ======================================================== 00:16:41.848 Latency(us) 00:16:41.848 Device Information : IOPS MiB/s Average min max 00:16:41.848 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39957.80 156.09 3205.78 834.84 6840.38 00:16:41.848 ======================================================== 00:16:41.848 Total : 39957.80 156.09 3205.78 834.84 6840.38 00:16:41.848 00:16:41.848 23:13:04 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:41.848 EAL: No free 2048 kB hugepages reported on node 1 00:16:47.141 Initializing NVMe Controllers 00:16:47.141 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:47.141 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:47.141 Initialization complete. Launching workers. 
00:16:47.141 ======================================================== 00:16:47.141 Latency(us) 00:16:47.141 Device Information : IOPS MiB/s Average min max 00:16:47.141 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 37839.57 147.81 3382.39 1083.35 11861.84 00:16:47.141 ======================================================== 00:16:47.141 Total : 37839.57 147.81 3382.39 1083.35 11861.84 00:16:47.141 00:16:47.141 23:13:09 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:47.141 EAL: No free 2048 kB hugepages reported on node 1 00:16:52.512 Initializing NVMe Controllers 00:16:52.512 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:52.512 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:52.512 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:52.512 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:52.512 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:52.512 Initialization complete. Launching workers. 00:16:52.512 Starting thread on core 2 00:16:52.512 Starting thread on core 3 00:16:52.512 Starting thread on core 1 00:16:52.512 23:13:14 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:52.512 EAL: No free 2048 kB hugepages reported on node 1 00:16:55.810 Initializing NVMe Controllers 00:16:55.810 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:55.810 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:55.810 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:55.810 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:55.810 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:55.810 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:55.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:55.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:55.810 Initialization complete. Launching workers. 
00:16:55.810 Starting thread on core 1 with urgent priority queue 00:16:55.810 Starting thread on core 2 with urgent priority queue 00:16:55.810 Starting thread on core 3 with urgent priority queue 00:16:55.810 Starting thread on core 0 with urgent priority queue 00:16:55.810 SPDK bdev Controller (SPDK2 ) core 0: 12846.33 IO/s 7.78 secs/100000 ios 00:16:55.810 SPDK bdev Controller (SPDK2 ) core 1: 9920.00 IO/s 10.08 secs/100000 ios 00:16:55.810 SPDK bdev Controller (SPDK2 ) core 2: 12214.67 IO/s 8.19 secs/100000 ios 00:16:55.810 SPDK bdev Controller (SPDK2 ) core 3: 6046.67 IO/s 16.54 secs/100000 ios 00:16:55.810 ======================================================== 00:16:55.810 00:16:55.810 23:13:18 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:55.810 EAL: No free 2048 kB hugepages reported on node 1 00:16:55.810 Initializing NVMe Controllers 00:16:55.810 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:55.810 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:55.810 Namespace ID: 1 size: 0GB 00:16:55.810 Initialization complete. 00:16:55.810 INFO: using host memory buffer for IO 00:16:55.810 Hello world! 00:16:55.810 23:13:18 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:55.810 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.192 Initializing NVMe Controllers 00:16:57.192 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:57.192 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:57.192 Initialization complete. Launching workers. 
00:16:57.192 submit (in ns) avg, min, max = 7316.0, 3845.0, 5995689.2 00:16:57.192 complete (in ns) avg, min, max = 17318.6, 2352.5, 6990240.0 00:16:57.192 00:16:57.192 Submit histogram 00:16:57.192 ================ 00:16:57.192 Range in us Cumulative Count 00:16:57.192 3.840 - 3.867: 1.0711% ( 207) 00:16:57.192 3.867 - 3.893: 5.0657% ( 772) 00:16:57.192 3.893 - 3.920: 12.6927% ( 1474) 00:16:57.192 3.920 - 3.947: 23.3727% ( 2064) 00:16:57.192 3.947 - 3.973: 36.0706% ( 2454) 00:16:57.192 3.973 - 4.000: 49.2445% ( 2546) 00:16:57.192 4.000 - 4.027: 66.4183% ( 3319) 00:16:57.192 4.027 - 4.053: 81.8224% ( 2977) 00:16:57.192 4.053 - 4.080: 91.9176% ( 1951) 00:16:57.192 4.080 - 4.107: 97.0454% ( 991) 00:16:57.192 4.107 - 4.133: 98.7478% ( 329) 00:16:57.192 4.133 - 4.160: 99.2342% ( 94) 00:16:57.192 4.160 - 4.187: 99.4256% ( 37) 00:16:57.192 4.187 - 4.213: 99.4670% ( 8) 00:16:57.192 4.213 - 4.240: 99.4774% ( 2) 00:16:57.192 4.240 - 4.267: 99.4826% ( 1) 00:16:57.192 4.320 - 4.347: 99.4877% ( 1) 00:16:57.192 4.347 - 4.373: 99.4929% ( 1) 00:16:57.192 4.373 - 4.400: 99.4981% ( 1) 00:16:57.192 4.453 - 4.480: 99.5033% ( 1) 00:16:57.192 4.587 - 4.613: 99.5084% ( 1) 00:16:57.192 4.747 - 4.773: 99.5136% ( 1) 00:16:57.192 4.880 - 4.907: 99.5188% ( 1) 00:16:57.192 4.987 - 5.013: 99.5240% ( 1) 00:16:57.192 5.040 - 5.067: 99.5291% ( 1) 00:16:57.192 5.093 - 5.120: 99.5395% ( 2) 00:16:57.192 5.120 - 5.147: 99.5447% ( 1) 00:16:57.192 5.280 - 5.307: 99.5498% ( 1) 00:16:57.192 5.520 - 5.547: 99.5550% ( 1) 00:16:57.192 5.547 - 5.573: 99.5602% ( 1) 00:16:57.192 5.787 - 5.813: 99.5654% ( 1) 00:16:57.192 5.867 - 5.893: 99.5705% ( 1) 00:16:57.192 5.920 - 5.947: 99.5757% ( 1) 00:16:57.192 5.947 - 5.973: 99.5809% ( 1) 00:16:57.192 5.973 - 6.000: 99.5860% ( 1) 00:16:57.192 6.000 - 6.027: 99.5912% ( 1) 00:16:57.192 6.053 - 6.080: 99.5964% ( 1) 00:16:57.192 6.080 - 6.107: 99.6067% ( 2) 00:16:57.192 6.133 - 6.160: 99.6171% ( 2) 00:16:57.192 6.160 - 6.187: 99.6223% ( 1) 00:16:57.192 6.187 - 6.213: 99.6326% ( 2) 00:16:57.192 6.213 - 6.240: 99.6533% ( 4) 00:16:57.192 6.240 - 6.267: 99.6585% ( 1) 00:16:57.192 6.267 - 6.293: 99.6637% ( 1) 00:16:57.192 6.320 - 6.347: 99.6688% ( 1) 00:16:57.192 6.347 - 6.373: 99.6792% ( 2) 00:16:57.193 6.400 - 6.427: 99.6844% ( 1) 00:16:57.193 6.453 - 6.480: 99.6947% ( 2) 00:16:57.193 6.480 - 6.507: 99.7051% ( 2) 00:16:57.193 6.507 - 6.533: 99.7102% ( 1) 00:16:57.193 6.560 - 6.587: 99.7154% ( 1) 00:16:57.193 6.880 - 6.933: 99.7206% ( 1) 00:16:57.193 7.093 - 7.147: 99.7258% ( 1) 00:16:57.193 7.200 - 7.253: 99.7309% ( 1) 00:16:57.193 7.253 - 7.307: 99.7465% ( 3) 00:16:57.193 7.360 - 7.413: 99.7516% ( 1) 00:16:57.193 7.413 - 7.467: 99.7568% ( 1) 00:16:57.193 7.520 - 7.573: 99.7620% ( 1) 00:16:57.193 7.573 - 7.627: 99.7723% ( 2) 00:16:57.193 7.627 - 7.680: 99.7775% ( 1) 00:16:57.193 7.680 - 7.733: 99.7827% ( 1) 00:16:57.193 7.787 - 7.840: 99.8034% ( 4) 00:16:57.193 7.840 - 7.893: 99.8137% ( 2) 00:16:57.193 7.893 - 7.947: 99.8344% ( 4) 00:16:57.193 7.947 - 8.000: 99.8396% ( 1) 00:16:57.193 8.000 - 8.053: 99.8448% ( 1) 00:16:57.193 8.053 - 8.107: 99.8551% ( 2) 00:16:57.193 8.160 - 8.213: 99.8655% ( 2) 00:16:57.193 8.267 - 8.320: 99.8758% ( 2) 00:16:57.193 8.320 - 8.373: 99.8810% ( 1) 00:16:57.193 8.427 - 8.480: 99.8862% ( 1) 00:16:57.193 8.693 - 8.747: 99.8965% ( 2) 00:16:57.193 8.800 - 8.853: 99.9017% ( 1) 00:16:57.193 9.067 - 9.120: 99.9069% ( 1) 00:16:57.193 9.547 - 9.600: 99.9120% ( 1) 00:16:57.193 11.893 - 11.947: 99.9172% ( 1) 00:16:57.193 2007.040 - 2020.693: 99.9224% ( 1) 00:16:57.193 3986.773 - 
4014.080: 99.9948% ( 14) 00:16:57.193 5980.160 - 6007.467: 100.0000% ( 1) 00:16:57.193 00:16:57.193 Complete histogram 00:16:57.193 ================== 00:16:57.193 Range in us Cumulative Count 00:16:57.193 2.347 - 2.360: 0.0052% ( 1) 00:16:57.193 2.360 - 2.373: 0.1708% ( 32) 00:16:57.193 2.373 - 2.387: 2.4061% ( 432) 00:16:57.193 2.387 - 2.400: 2.5975% ( 37) 00:16:57.193 2.400 - 2.413: 5.8367% ( 626) 00:16:57.193 2.413 - 2.427: 57.7874% ( 10040) 00:16:57.193 2.427 - 2.440: 66.8219% ( 1746) 00:16:57.193 2.440 - 2.453: 75.2561% ( 1630) 00:16:57.193 2.453 - 2.467: 80.3374% ( 982) 00:16:57.193 2.467 - 2.480: 82.0035% ( 322) 00:16:57.193 2.480 - 2.493: 86.1689% ( 805) 00:16:57.193 2.493 - 2.507: 92.4195% ( 1208) 00:16:57.193 2.507 - 2.520: 96.2693% ( 744) 00:16:57.193 2.520 - 2.533: 97.9613% ( 327) 00:16:57.193 2.533 - 2.547: 98.8979% ( 181) 00:16:57.193 2.547 - 2.560: 99.2290% ( 64) 00:16:57.193 2.560 - 2.573: 99.2859% ( 11) 00:16:57.193 2.573 - 2.587: 99.3273% ( 8) 00:16:57.193 2.587 - 2.600: 99.3325% ( 1) 00:16:57.193 4.373 - 4.400: 99.3377% ( 1) 00:16:57.193 4.400 - 4.427: 99.3429% ( 1) 00:16:57.193 4.480 - 4.507: 99.3480% ( 1) 00:16:57.193 4.507 - 4.533: 99.3532% ( 1) 00:16:57.193 4.533 - 4.560: 99.3636% ( 2) 00:16:57.193 4.587 - 4.613: 99.3739% ( 2) 00:16:57.193 4.613 - 4.640: 99.3791% ( 1) 00:16:57.193 4.640 - 4.667: 99.3946% ( 3) 00:16:57.193 4.667 - 4.693: 99.4049% ( 2) 00:16:57.193 4.693 - 4.720: 99.4153% ( 2) 00:16:57.193 4.747 - 4.773: 99.4360% ( 4) 00:16:57.193 4.800 - 4.827: 99.4412% ( 1) 00:16:57.193 5.013 - 5.040: 99.4463% ( 1) 00:16:57.193 5.253 - 5.280: 99.4515% ( 1) 00:16:57.193 5.360 - 5.387: 99.4567% ( 1) 00:16:57.193 5.387 - 5.413: 99.4619% ( 1) 00:16:57.193 5.600 - 5.627: 99.4670% ( 1) 00:16:57.193 5.680 - 5.707: 99.4722% ( 1) 00:16:57.193 5.707 - 5.733: 99.4774% ( 1) 00:16:57.193 5.813 - 5.840: 99.4929% ( 3) 00:16:57.193 5.840 - 5.867: 99.5033% ( 2) 00:16:57.193 5.867 - 5.893: 99.5084% ( 1) 00:16:57.193 5.893 - 5.920: 99.5136% ( 1) 00:16:57.193 5.947 - 5.973: 99.5188% ( 1) 00:16:57.193 5.973 - 6.000: 99.5240% ( 1) 00:16:57.193 6.027 - 6.053: 99.5291% ( 1) 00:16:57.193 6.133 - 6.160: 99.5395% ( 2) 00:16:57.193 6.160 - 6.187: 99.5498% ( 2) 00:16:57.193 6.453 - 6.480: 99.5550% ( 1) 00:16:57.193 6.480 - 6.507: 99.5602% ( 1) 00:16:57.193 6.507 - 6.533: 99.5654% ( 1) 00:16:57.193 6.613 - 6.640: 99.5757% ( 2) 00:16:57.193 6.720 - 6.747: 99.5809% ( 1) 00:16:57.193 6.827 - 6.880: 99.5860% ( 1) 00:16:57.193 6.880 - 6.933: 99.5912% ( 1) 00:16:57.193 6.933 - 6.987: 99.5964% ( 1) 00:16:57.193 7.093 - 7.147: 99.6016% ( 1) 00:16:57.193 7.307 - 7.360: 99.6067% ( 1) 00:16:57.193 7.733 - 7.787: 99.6119% ( 1) 00:16:57.193 11.147 - 11.200: 99.6171% ( 1) 00:16:57.193 44.587 - 44.800: 99.6223% ( 1) 00:16:57.193 1010.347 - 1017.173: 99.6274% ( 1) 00:16:57.193 1024.000 - 1030.827: 99.6326% ( 1) 00:16:57.193 1993.387 - 2007.040: 99.6378% ( 1) 00:16:57.193 2007.040 - 2020.693: 99.6430% ( 1) 00:16:57.193 2034.347 - 2048.000: 99.6481% ( 1) 00:16:57.193 2088.960 - 2102.613: 99.6533% ( 1) 00:16:57.193 3522.560 - 3549.867: 99.6585% ( 1) 00:16:57.193 3986.773 - 4014.080: 99.9793% ( 62) 00:16:57.193 5980.160 - 6007.467: 99.9897% ( 2) 00:16:57.193 6963.200 - 6990.507: 100.0000% ( 2) 00:16:57.193 00:16:57.193 23:13:19 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:57.193 23:13:19 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:57.193 23:13:19 -- target/nvmf_vfio_user.sh@23 -- # local 
subnqn=nqn.2019-07.io.spdk:cnode2 00:16:57.193 23:13:19 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:57.193 23:13:19 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:57.454 [ 00:16:57.454 { 00:16:57.454 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:57.454 "subtype": "Discovery", 00:16:57.454 "listen_addresses": [], 00:16:57.454 "allow_any_host": true, 00:16:57.454 "hosts": [] 00:16:57.454 }, 00:16:57.454 { 00:16:57.454 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:57.454 "subtype": "NVMe", 00:16:57.454 "listen_addresses": [ 00:16:57.454 { 00:16:57.454 "transport": "VFIOUSER", 00:16:57.454 "trtype": "VFIOUSER", 00:16:57.454 "adrfam": "IPv4", 00:16:57.454 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:57.454 "trsvcid": "0" 00:16:57.454 } 00:16:57.454 ], 00:16:57.454 "allow_any_host": true, 00:16:57.454 "hosts": [], 00:16:57.454 "serial_number": "SPDK1", 00:16:57.454 "model_number": "SPDK bdev Controller", 00:16:57.454 "max_namespaces": 32, 00:16:57.454 "min_cntlid": 1, 00:16:57.454 "max_cntlid": 65519, 00:16:57.454 "namespaces": [ 00:16:57.454 { 00:16:57.454 "nsid": 1, 00:16:57.454 "bdev_name": "Malloc1", 00:16:57.454 "name": "Malloc1", 00:16:57.454 "nguid": "F6F106BD1AE64ADEA3772E6C34F654AC", 00:16:57.454 "uuid": "f6f106bd-1ae6-4ade-a377-2e6c34f654ac" 00:16:57.454 }, 00:16:57.454 { 00:16:57.454 "nsid": 2, 00:16:57.454 "bdev_name": "Malloc3", 00:16:57.454 "name": "Malloc3", 00:16:57.454 "nguid": "E4ED2B49D78045D8812846D931DC8AA5", 00:16:57.454 "uuid": "e4ed2b49-d780-45d8-8128-46d931dc8aa5" 00:16:57.454 } 00:16:57.454 ] 00:16:57.454 }, 00:16:57.454 { 00:16:57.454 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:57.454 "subtype": "NVMe", 00:16:57.454 "listen_addresses": [ 00:16:57.454 { 00:16:57.454 "transport": "VFIOUSER", 00:16:57.454 "trtype": "VFIOUSER", 00:16:57.454 "adrfam": "IPv4", 00:16:57.454 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:57.454 "trsvcid": "0" 00:16:57.454 } 00:16:57.454 ], 00:16:57.454 "allow_any_host": true, 00:16:57.454 "hosts": [], 00:16:57.454 "serial_number": "SPDK2", 00:16:57.454 "model_number": "SPDK bdev Controller", 00:16:57.454 "max_namespaces": 32, 00:16:57.454 "min_cntlid": 1, 00:16:57.454 "max_cntlid": 65519, 00:16:57.454 "namespaces": [ 00:16:57.454 { 00:16:57.454 "nsid": 1, 00:16:57.454 "bdev_name": "Malloc2", 00:16:57.454 "name": "Malloc2", 00:16:57.454 "nguid": "D2EDFAF245CC4CE9ABAC997CB0BF7308", 00:16:57.454 "uuid": "d2edfaf2-45cc-4ce9-abac-997cb0bf7308" 00:16:57.454 } 00:16:57.454 ] 00:16:57.454 } 00:16:57.454 ] 00:16:57.454 23:13:19 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:57.454 23:13:19 -- target/nvmf_vfio_user.sh@34 -- # aerpid=2786611 00:16:57.454 23:13:19 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:57.454 23:13:19 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:57.454 23:13:19 -- common/autotest_common.sh@1244 -- # local i=0 00:16:57.454 23:13:19 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:57.454 23:13:19 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:57.454 23:13:19 -- common/autotest_common.sh@1255 -- # return 0 00:16:57.454 23:13:19 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:57.454 23:13:19 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:57.454 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.454 Malloc4 00:16:57.454 23:13:20 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:57.714 23:13:20 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:57.714 Asynchronous Event Request test 00:16:57.714 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:57.714 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:57.714 Registering asynchronous event callbacks... 00:16:57.714 Starting namespace attribute notice tests for all controllers... 00:16:57.714 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:57.714 aer_cb - Changed Namespace 00:16:57.714 Cleaning up... 00:16:57.975 [ 00:16:57.975 { 00:16:57.975 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:57.975 "subtype": "Discovery", 00:16:57.975 "listen_addresses": [], 00:16:57.975 "allow_any_host": true, 00:16:57.975 "hosts": [] 00:16:57.975 }, 00:16:57.975 { 00:16:57.975 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:57.975 "subtype": "NVMe", 00:16:57.975 "listen_addresses": [ 00:16:57.975 { 00:16:57.975 "transport": "VFIOUSER", 00:16:57.975 "trtype": "VFIOUSER", 00:16:57.975 "adrfam": "IPv4", 00:16:57.975 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:57.975 "trsvcid": "0" 00:16:57.975 } 00:16:57.975 ], 00:16:57.975 "allow_any_host": true, 00:16:57.975 "hosts": [], 00:16:57.975 "serial_number": "SPDK1", 00:16:57.975 "model_number": "SPDK bdev Controller", 00:16:57.975 "max_namespaces": 32, 00:16:57.975 "min_cntlid": 1, 00:16:57.975 "max_cntlid": 65519, 00:16:57.975 "namespaces": [ 00:16:57.975 { 00:16:57.975 "nsid": 1, 00:16:57.975 "bdev_name": "Malloc1", 00:16:57.975 "name": "Malloc1", 00:16:57.975 "nguid": "F6F106BD1AE64ADEA3772E6C34F654AC", 00:16:57.975 "uuid": "f6f106bd-1ae6-4ade-a377-2e6c34f654ac" 00:16:57.975 }, 00:16:57.975 { 00:16:57.975 "nsid": 2, 00:16:57.975 "bdev_name": "Malloc3", 00:16:57.975 "name": "Malloc3", 00:16:57.975 "nguid": "E4ED2B49D78045D8812846D931DC8AA5", 00:16:57.975 "uuid": "e4ed2b49-d780-45d8-8128-46d931dc8aa5" 00:16:57.975 } 00:16:57.975 ] 00:16:57.975 }, 00:16:57.975 { 00:16:57.975 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:57.975 "subtype": "NVMe", 00:16:57.975 "listen_addresses": [ 00:16:57.975 { 00:16:57.975 "transport": "VFIOUSER", 00:16:57.975 "trtype": "VFIOUSER", 00:16:57.975 "adrfam": "IPv4", 00:16:57.975 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:57.975 "trsvcid": "0" 00:16:57.975 } 00:16:57.975 ], 00:16:57.975 "allow_any_host": true, 00:16:57.975 "hosts": [], 00:16:57.975 "serial_number": "SPDK2", 00:16:57.975 "model_number": "SPDK bdev Controller", 00:16:57.975 "max_namespaces": 32, 00:16:57.975 "min_cntlid": 1, 00:16:57.975 "max_cntlid": 65519, 00:16:57.975 "namespaces": [ 00:16:57.975 { 00:16:57.975 "nsid": 1, 00:16:57.975 "bdev_name": "Malloc2", 00:16:57.975 "name": "Malloc2", 00:16:57.975 "nguid": "D2EDFAF245CC4CE9ABAC997CB0BF7308", 00:16:57.975 "uuid": "d2edfaf2-45cc-4ce9-abac-997cb0bf7308" 
00:16:57.975 }, 00:16:57.975 { 00:16:57.975 "nsid": 2, 00:16:57.975 "bdev_name": "Malloc4", 00:16:57.975 "name": "Malloc4", 00:16:57.975 "nguid": "7CA0A6905F0346AE9F38EF05167D21B6", 00:16:57.975 "uuid": "7ca0a690-5f03-46ae-9f38-ef05167d21b6" 00:16:57.975 } 00:16:57.975 ] 00:16:57.975 } 00:16:57.975 ] 00:16:57.975 23:13:20 -- target/nvmf_vfio_user.sh@44 -- # wait 2786611 00:16:57.975 23:13:20 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:57.975 23:13:20 -- target/nvmf_vfio_user.sh@95 -- # killprocess 2777514 00:16:57.975 23:13:20 -- common/autotest_common.sh@926 -- # '[' -z 2777514 ']' 00:16:57.975 23:13:20 -- common/autotest_common.sh@930 -- # kill -0 2777514 00:16:57.975 23:13:20 -- common/autotest_common.sh@931 -- # uname 00:16:57.975 23:13:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:57.975 23:13:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2777514 00:16:57.975 23:13:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:57.975 23:13:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:57.975 23:13:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2777514' 00:16:57.975 killing process with pid 2777514 00:16:57.975 23:13:20 -- common/autotest_common.sh@945 -- # kill 2777514 00:16:57.975 [2024-06-07 23:13:20.489430] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:16:57.975 23:13:20 -- common/autotest_common.sh@950 -- # wait 2777514 00:16:57.975 23:13:20 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:57.975 23:13:20 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:57.975 23:13:20 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:57.975 23:13:20 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:57.975 23:13:20 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:58.236 23:13:20 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2786723 00:16:58.236 23:13:20 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:58.236 23:13:20 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2786723' 00:16:58.236 Process pid: 2786723 00:16:58.236 23:13:20 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:58.236 23:13:20 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2786723 00:16:58.236 23:13:20 -- common/autotest_common.sh@819 -- # '[' -z 2786723 ']' 00:16:58.236 23:13:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.236 23:13:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:58.236 23:13:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.236 23:13:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:58.236 23:13:20 -- common/autotest_common.sh@10 -- # set +x 00:16:58.236 [2024-06-07 23:13:20.684672] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:58.236 [2024-06-07 23:13:20.685596] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:16:58.236 [2024-06-07 23:13:20.685638] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:58.236 EAL: No free 2048 kB hugepages reported on node 1 00:16:58.236 [2024-06-07 23:13:20.744736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:58.236 [2024-06-07 23:13:20.773623] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:58.236 [2024-06-07 23:13:20.773753] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:58.236 [2024-06-07 23:13:20.773763] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:58.236 [2024-06-07 23:13:20.773771] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:58.236 [2024-06-07 23:13:20.773912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.236 [2024-06-07 23:13:20.774011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:58.236 [2024-06-07 23:13:20.774168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.236 [2024-06-07 23:13:20.774169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:58.236 [2024-06-07 23:13:20.828701] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:16:58.236 [2024-06-07 23:13:20.828838] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:16:58.236 [2024-06-07 23:13:20.829141] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:16:58.236 [2024-06-07 23:13:20.829335] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:58.236 [2024-06-07 23:13:20.829415] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 
00:16:58.869 23:13:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:58.869 23:13:21 -- common/autotest_common.sh@852 -- # return 0 00:16:58.869 23:13:21 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:59.807 23:13:22 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:17:00.067 23:13:22 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:00.067 23:13:22 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:00.067 23:13:22 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:00.067 23:13:22 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:00.067 23:13:22 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:00.328 Malloc1 00:17:00.328 23:13:22 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:00.328 23:13:22 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:00.588 23:13:23 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:00.847 23:13:23 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:00.847 23:13:23 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:00.847 23:13:23 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:00.847 Malloc2 00:17:00.847 23:13:23 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:01.106 23:13:23 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:01.366 23:13:23 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:01.366 23:13:23 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:17:01.366 23:13:23 -- target/nvmf_vfio_user.sh@95 -- # killprocess 2786723 00:17:01.366 23:13:23 -- common/autotest_common.sh@926 -- # '[' -z 2786723 ']' 00:17:01.366 23:13:23 -- common/autotest_common.sh@930 -- # kill -0 2786723 00:17:01.366 23:13:23 -- common/autotest_common.sh@931 -- # uname 00:17:01.366 23:13:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:01.366 23:13:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2786723 00:17:01.366 23:13:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:01.366 23:13:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:01.366 23:13:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2786723' 00:17:01.366 killing process with pid 2786723 00:17:01.366 23:13:24 -- common/autotest_common.sh@945 -- # kill 2786723 00:17:01.366 23:13:24 -- common/autotest_common.sh@950 -- # wait 2786723 00:17:01.627 23:13:24 -- target/nvmf_vfio_user.sh@97 -- # rm -rf 
/var/run/vfio-user 00:17:01.627 23:13:24 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:01.627 00:17:01.627 real 0m50.235s 00:17:01.627 user 3m19.594s 00:17:01.627 sys 0m2.900s 00:17:01.627 23:13:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:01.627 23:13:24 -- common/autotest_common.sh@10 -- # set +x 00:17:01.627 ************************************ 00:17:01.627 END TEST nvmf_vfio_user 00:17:01.627 ************************************ 00:17:01.627 23:13:24 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:01.627 23:13:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:01.627 23:13:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:01.627 23:13:24 -- common/autotest_common.sh@10 -- # set +x 00:17:01.627 ************************************ 00:17:01.627 START TEST nvmf_vfio_user_nvme_compliance 00:17:01.627 ************************************ 00:17:01.627 23:13:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:01.627 * Looking for test storage... 00:17:01.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:17:01.627 23:13:24 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:01.627 23:13:24 -- nvmf/common.sh@7 -- # uname -s 00:17:01.627 23:13:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:01.627 23:13:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.627 23:13:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.627 23:13:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.627 23:13:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:01.627 23:13:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:01.627 23:13:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.627 23:13:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:01.627 23:13:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.627 23:13:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:01.627 23:13:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:01.627 23:13:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:01.627 23:13:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.627 23:13:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:01.627 23:13:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:01.627 23:13:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:01.627 23:13:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.627 23:13:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.627 23:13:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.627 23:13:24 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.627 23:13:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.627 23:13:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.887 23:13:24 -- paths/export.sh@5 -- # export PATH 00:17:01.887 23:13:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.887 23:13:24 -- nvmf/common.sh@46 -- # : 0 00:17:01.887 23:13:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:01.887 23:13:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:01.887 23:13:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:01.887 23:13:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:01.887 23:13:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.887 23:13:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:01.887 23:13:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:01.887 23:13:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:01.887 23:13:24 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:01.887 23:13:24 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:01.887 23:13:24 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:17:01.887 23:13:24 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:17:01.887 23:13:24 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:17:01.887 23:13:24 -- compliance/compliance.sh@20 -- # nvmfpid=2787474 00:17:01.887 23:13:24 -- compliance/compliance.sh@21 -- # echo 'Process pid: 2787474' 00:17:01.887 Process pid: 2787474 00:17:01.887 23:13:24 
-- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:01.887 23:13:24 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:01.887 23:13:24 -- compliance/compliance.sh@24 -- # waitforlisten 2787474 00:17:01.887 23:13:24 -- common/autotest_common.sh@819 -- # '[' -z 2787474 ']' 00:17:01.887 23:13:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.887 23:13:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:01.887 23:13:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.887 23:13:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:01.887 23:13:24 -- common/autotest_common.sh@10 -- # set +x 00:17:01.887 [2024-06-07 23:13:24.371185] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:01.887 [2024-06-07 23:13:24.371270] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.887 EAL: No free 2048 kB hugepages reported on node 1 00:17:01.887 [2024-06-07 23:13:24.437162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:01.887 [2024-06-07 23:13:24.474707] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:01.887 [2024-06-07 23:13:24.474860] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:01.887 [2024-06-07 23:13:24.474870] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:01.887 [2024-06-07 23:13:24.474879] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
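In the compliance run that follows, the rpc_cmd calls issue the same RPCs that scripts/rpc.py would send to /var/tmp/spdk.sock. Condensed into plain commands (a hedged paraphrase of compliance.sh, not the script verbatim; $SPDK again stands for the workspace checkout), the target setup and the compliance binary invocation look roughly like this:

$SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
$SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
$SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
# run the compliance suite against the vfio-user endpoint just created
$SPDK/test/nvme/compliance/nvme_compliance -g \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'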
00:17:01.887 [2024-06-07 23:13:24.475028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.887 [2024-06-07 23:13:24.475167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:01.887 [2024-06-07 23:13:24.475170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.828 23:13:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:02.828 23:13:25 -- common/autotest_common.sh@852 -- # return 0 00:17:02.828 23:13:25 -- compliance/compliance.sh@26 -- # sleep 1 00:17:03.769 23:13:26 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:03.769 23:13:26 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:03.769 23:13:26 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:03.769 23:13:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:03.769 23:13:26 -- common/autotest_common.sh@10 -- # set +x 00:17:03.769 23:13:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:03.769 23:13:26 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:03.769 23:13:26 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:03.769 23:13:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:03.769 23:13:26 -- common/autotest_common.sh@10 -- # set +x 00:17:03.769 malloc0 00:17:03.769 23:13:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:03.769 23:13:26 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:17:03.769 23:13:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:03.769 23:13:26 -- common/autotest_common.sh@10 -- # set +x 00:17:03.769 23:13:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:03.769 23:13:26 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:03.769 23:13:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:03.769 23:13:26 -- common/autotest_common.sh@10 -- # set +x 00:17:03.769 23:13:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:03.769 23:13:26 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:03.769 23:13:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:03.769 23:13:26 -- common/autotest_common.sh@10 -- # set +x 00:17:03.769 23:13:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:03.769 23:13:26 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:17:03.769 EAL: No free 2048 kB hugepages reported on node 1 00:17:03.769 00:17:03.769 00:17:03.769 CUnit - A unit testing framework for C - Version 2.1-3 00:17:03.769 http://cunit.sourceforge.net/ 00:17:03.769 00:17:03.769 00:17:03.769 Suite: nvme_compliance 00:17:03.769 Test: admin_identify_ctrlr_verify_dptr ...[2024-06-07 23:13:26.395374] vfio_user.c: 789:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:17:03.769 [2024-06-07 23:13:26.395397] vfio_user.c:5484:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:17:03.769 [2024-06-07 23:13:26.395402] vfio_user.c:5576:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:17:03.769 passed 00:17:04.030 Test: admin_identify_ctrlr_verify_fused ...passed 00:17:04.030 Test: admin_identify_ns ...[2024-06-07 
23:13:26.649256] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:04.030 [2024-06-07 23:13:26.657253] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:04.290 passed 00:17:04.290 Test: admin_get_features_mandatory_features ...passed 00:17:04.290 Test: admin_get_features_optional_features ...passed 00:17:04.550 Test: admin_set_features_number_of_queues ...passed 00:17:04.550 Test: admin_get_log_page_mandatory_logs ...passed 00:17:04.810 Test: admin_get_log_page_with_lpo ...[2024-06-07 23:13:27.327252] ctrlr.c:2546:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:04.810 passed 00:17:04.810 Test: fabric_property_get ...passed 00:17:05.070 Test: admin_delete_io_sq_use_admin_qid ...[2024-06-07 23:13:27.532144] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:05.070 passed 00:17:05.070 Test: admin_delete_io_sq_delete_sq_twice ...[2024-06-07 23:13:27.710249] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:05.070 [2024-06-07 23:13:27.726248] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:05.331 passed 00:17:05.331 Test: admin_delete_io_cq_use_admin_qid ...[2024-06-07 23:13:27.826198] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:17:05.331 passed 00:17:05.331 Test: admin_delete_io_cq_delete_cq_first ...[2024-06-07 23:13:27.996249] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:05.592 [2024-06-07 23:13:28.020248] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:05.592 passed 00:17:05.592 Test: admin_create_io_cq_verify_iv_pc ...[2024-06-07 23:13:28.119398] vfio_user.c:2150:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:05.592 [2024-06-07 23:13:28.119424] vfio_user.c:2144:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:17:05.592 passed 00:17:05.853 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-06-07 23:13:28.307262] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:05.853 [2024-06-07 23:13:28.315251] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:05.853 [2024-06-07 23:13:28.323247] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:05.853 [2024-06-07 23:13:28.331253] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:05.853 passed 00:17:05.853 Test: admin_create_io_sq_verify_pc ...[2024-06-07 23:13:28.468258] vfio_user.c:2044:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:05.853 passed 00:17:07.238 Test: admin_create_io_qp_max_qps ...[2024-06-07 23:13:29.689254] nvme_ctrlr.c:5304:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:17:07.499 passed 00:17:07.761 Test: admin_create_io_sq_shared_cq ...[2024-06-07 23:13:30.294252] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:07.761 passed 00:17:07.761 00:17:07.761 Run Summary: Type Total Ran Passed Failed Inactive 00:17:07.761 suites 1 1 n/a 0 0 00:17:07.761 tests 18 18 18 0 0 00:17:07.761 asserts 360 360 360 0 n/a 00:17:07.761 00:17:07.761 Elapsed time = 1.652 seconds 00:17:07.761 
23:13:30 -- compliance/compliance.sh@42 -- # killprocess 2787474 00:17:07.761 23:13:30 -- common/autotest_common.sh@926 -- # '[' -z 2787474 ']' 00:17:07.761 23:13:30 -- common/autotest_common.sh@930 -- # kill -0 2787474 00:17:07.761 23:13:30 -- common/autotest_common.sh@931 -- # uname 00:17:07.761 23:13:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:07.761 23:13:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2787474 00:17:07.761 23:13:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:07.761 23:13:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:07.761 23:13:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2787474' 00:17:07.761 killing process with pid 2787474 00:17:07.761 23:13:30 -- common/autotest_common.sh@945 -- # kill 2787474 00:17:07.761 23:13:30 -- common/autotest_common.sh@950 -- # wait 2787474 00:17:08.022 23:13:30 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:08.022 23:13:30 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:08.022 00:17:08.022 real 0m6.376s 00:17:08.022 user 0m18.317s 00:17:08.022 sys 0m0.482s 00:17:08.022 23:13:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:08.022 23:13:30 -- common/autotest_common.sh@10 -- # set +x 00:17:08.022 ************************************ 00:17:08.022 END TEST nvmf_vfio_user_nvme_compliance 00:17:08.022 ************************************ 00:17:08.022 23:13:30 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:08.022 23:13:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:08.022 23:13:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:08.022 23:13:30 -- common/autotest_common.sh@10 -- # set +x 00:17:08.022 ************************************ 00:17:08.022 START TEST nvmf_vfio_user_fuzz 00:17:08.022 ************************************ 00:17:08.022 23:13:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:08.022 * Looking for test storage... 
00:17:08.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:08.022 23:13:30 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:08.022 23:13:30 -- nvmf/common.sh@7 -- # uname -s 00:17:08.022 23:13:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.022 23:13:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.284 23:13:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.284 23:13:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:08.284 23:13:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:08.284 23:13:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:08.284 23:13:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.284 23:13:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:08.284 23:13:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.284 23:13:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:08.284 23:13:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:08.284 23:13:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:08.284 23:13:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.284 23:13:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:08.284 23:13:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:08.284 23:13:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:08.284 23:13:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.284 23:13:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.284 23:13:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.284 23:13:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.284 23:13:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.284 23:13:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.284 23:13:30 -- paths/export.sh@5 -- # export PATH 00:17:08.284 23:13:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.284 23:13:30 -- nvmf/common.sh@46 -- # : 0 00:17:08.284 23:13:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:08.284 23:13:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:08.284 23:13:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:08.284 23:13:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:08.284 23:13:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:08.284 23:13:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:08.284 23:13:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:08.284 23:13:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:08.284 23:13:30 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:08.284 23:13:30 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:08.284 23:13:30 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:08.284 23:13:30 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:08.284 23:13:30 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:08.284 23:13:30 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:08.284 23:13:30 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:08.284 23:13:30 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2788884 00:17:08.284 23:13:30 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2788884' 00:17:08.284 Process pid: 2788884 00:17:08.284 23:13:30 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:08.284 23:13:30 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:08.284 23:13:30 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2788884 00:17:08.284 23:13:30 -- common/autotest_common.sh@819 -- # '[' -z 2788884 ']' 00:17:08.284 23:13:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.284 23:13:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:08.284 23:13:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
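The vfio_user_fuzz target setup logged below mirrors the compliance test above (VFIOUSER transport, a malloc0 namespace under nqn.2021-09.io.spdk:cnode0, listener at /var/run/vfio-user). The new piece is the fuzzer invocation; the flags are copied from the log and only the comments are added here ($SPDK as above):

# 30-second fuzz run against the VFIO-USER endpoint; -S pins the random seed so
# the run is reproducible, -F passes the transport ID of the target subsystem.
$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz \
    -t 30 -S 123456 \
    -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' \
    -N -a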
00:17:08.284 23:13:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:08.284 23:13:30 -- common/autotest_common.sh@10 -- # set +x 00:17:09.226 23:13:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:09.226 23:13:31 -- common/autotest_common.sh@852 -- # return 0 00:17:09.226 23:13:31 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:17:10.168 23:13:32 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:10.168 23:13:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:10.168 23:13:32 -- common/autotest_common.sh@10 -- # set +x 00:17:10.168 23:13:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:10.168 23:13:32 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:17:10.168 23:13:32 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:10.168 23:13:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:10.168 23:13:32 -- common/autotest_common.sh@10 -- # set +x 00:17:10.168 malloc0 00:17:10.168 23:13:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:10.168 23:13:32 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:17:10.168 23:13:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:10.168 23:13:32 -- common/autotest_common.sh@10 -- # set +x 00:17:10.168 23:13:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:10.168 23:13:32 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:10.168 23:13:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:10.168 23:13:32 -- common/autotest_common.sh@10 -- # set +x 00:17:10.168 23:13:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:10.168 23:13:32 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:10.168 23:13:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:10.168 23:13:32 -- common/autotest_common.sh@10 -- # set +x 00:17:10.168 23:13:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:10.168 23:13:32 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:17:10.168 23:13:32 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:42.278 Fuzzing completed. 
Shutting down the fuzz application 00:17:42.278 00:17:42.278 Dumping successful admin opcodes: 00:17:42.278 8, 9, 10, 24, 00:17:42.278 Dumping successful io opcodes: 00:17:42.278 0, 00:17:42.278 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1394488, total successful commands: 5476, random_seed: 3211919424 00:17:42.278 NS: 0x200003a1ef00 admin qp, Total commands completed: 198873, total successful commands: 1588, random_seed: 1429813248 00:17:42.278 23:14:02 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:42.278 23:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:42.278 23:14:02 -- common/autotest_common.sh@10 -- # set +x 00:17:42.278 23:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:42.278 23:14:02 -- target/vfio_user_fuzz.sh@46 -- # killprocess 2788884 00:17:42.278 23:14:02 -- common/autotest_common.sh@926 -- # '[' -z 2788884 ']' 00:17:42.278 23:14:02 -- common/autotest_common.sh@930 -- # kill -0 2788884 00:17:42.278 23:14:02 -- common/autotest_common.sh@931 -- # uname 00:17:42.278 23:14:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:42.278 23:14:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2788884 00:17:42.278 23:14:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:42.278 23:14:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:42.278 23:14:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2788884' 00:17:42.278 killing process with pid 2788884 00:17:42.278 23:14:03 -- common/autotest_common.sh@945 -- # kill 2788884 00:17:42.278 23:14:03 -- common/autotest_common.sh@950 -- # wait 2788884 00:17:42.278 23:14:03 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:42.278 23:14:03 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:42.278 00:17:42.278 real 0m32.629s 00:17:42.278 user 0m38.677s 00:17:42.278 sys 0m23.035s 00:17:42.278 23:14:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:42.278 23:14:03 -- common/autotest_common.sh@10 -- # set +x 00:17:42.278 ************************************ 00:17:42.278 END TEST nvmf_vfio_user_fuzz 00:17:42.279 ************************************ 00:17:42.279 23:14:03 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:42.279 23:14:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:42.279 23:14:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:42.279 23:14:03 -- common/autotest_common.sh@10 -- # set +x 00:17:42.279 ************************************ 00:17:42.279 START TEST nvmf_host_management 00:17:42.279 ************************************ 00:17:42.279 23:14:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:42.279 * Looking for test storage... 
00:17:42.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:42.279 23:14:03 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:42.279 23:14:03 -- nvmf/common.sh@7 -- # uname -s 00:17:42.279 23:14:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:42.279 23:14:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:42.279 23:14:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:42.279 23:14:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:42.279 23:14:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:42.279 23:14:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:42.279 23:14:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:42.279 23:14:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:42.279 23:14:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:42.279 23:14:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:42.279 23:14:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:42.279 23:14:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:42.279 23:14:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:42.279 23:14:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:42.279 23:14:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:42.279 23:14:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:42.279 23:14:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:42.279 23:14:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:42.279 23:14:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:42.279 23:14:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.279 23:14:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.279 23:14:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.279 23:14:03 -- paths/export.sh@5 -- # export PATH 00:17:42.279 23:14:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.279 23:14:03 -- nvmf/common.sh@46 -- # : 0 00:17:42.279 23:14:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:42.279 23:14:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:42.279 23:14:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:42.279 23:14:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:42.279 23:14:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:42.279 23:14:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:42.279 23:14:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:42.279 23:14:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:42.279 23:14:03 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:42.279 23:14:03 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:42.279 23:14:03 -- target/host_management.sh@104 -- # nvmftestinit 00:17:42.279 23:14:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:42.279 23:14:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:42.279 23:14:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:42.279 23:14:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:42.279 23:14:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:42.279 23:14:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.279 23:14:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:42.279 23:14:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.279 23:14:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:42.279 23:14:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:42.279 23:14:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:42.279 23:14:03 -- common/autotest_common.sh@10 -- # set +x 00:17:48.870 23:14:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:48.870 23:14:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:48.870 23:14:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:48.870 23:14:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:48.870 23:14:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:48.870 23:14:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:48.870 23:14:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:48.870 23:14:10 -- nvmf/common.sh@294 -- # net_devs=() 00:17:48.870 23:14:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:48.870 
23:14:10 -- nvmf/common.sh@295 -- # e810=() 00:17:48.870 23:14:10 -- nvmf/common.sh@295 -- # local -ga e810 00:17:48.870 23:14:10 -- nvmf/common.sh@296 -- # x722=() 00:17:48.870 23:14:10 -- nvmf/common.sh@296 -- # local -ga x722 00:17:48.870 23:14:10 -- nvmf/common.sh@297 -- # mlx=() 00:17:48.870 23:14:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:48.870 23:14:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:48.870 23:14:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:48.870 23:14:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:48.870 23:14:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:48.870 23:14:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:48.870 23:14:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:48.870 23:14:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:48.870 23:14:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:48.870 23:14:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:48.870 23:14:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:48.870 23:14:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:48.870 23:14:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:48.870 23:14:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:48.870 23:14:10 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:48.870 23:14:10 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:48.870 23:14:10 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:48.870 23:14:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:48.870 23:14:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:48.870 23:14:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:48.870 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:48.870 23:14:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:48.870 23:14:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:48.870 23:14:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.870 23:14:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:48.870 23:14:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:48.870 23:14:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:48.870 23:14:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:48.870 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:48.870 23:14:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:48.870 23:14:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:48.870 23:14:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.870 23:14:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:48.871 23:14:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:48.871 23:14:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:48.871 23:14:10 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:48.871 23:14:10 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:48.871 23:14:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:48.871 23:14:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.871 23:14:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:48.871 23:14:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.871 23:14:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:31:00.0: cvl_0_0' 00:17:48.871 Found net devices under 0000:31:00.0: cvl_0_0 00:17:48.871 23:14:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.871 23:14:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:48.871 23:14:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.871 23:14:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:48.871 23:14:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.871 23:14:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:48.871 Found net devices under 0000:31:00.1: cvl_0_1 00:17:48.871 23:14:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.871 23:14:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:48.871 23:14:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:48.871 23:14:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:48.871 23:14:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:48.871 23:14:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:48.871 23:14:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:48.871 23:14:10 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:48.871 23:14:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:48.871 23:14:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:48.871 23:14:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:48.871 23:14:10 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:48.871 23:14:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:48.871 23:14:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:48.871 23:14:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:48.871 23:14:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:48.871 23:14:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:48.871 23:14:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:48.871 23:14:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:48.871 23:14:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:48.871 23:14:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:48.871 23:14:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:48.871 23:14:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:48.871 23:14:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:48.871 23:14:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:48.871 23:14:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:48.871 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:48.871 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.708 ms 00:17:48.871 00:17:48.871 --- 10.0.0.2 ping statistics --- 00:17:48.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.871 rtt min/avg/max/mdev = 0.708/0.708/0.708/0.000 ms 00:17:48.871 23:14:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:48.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:48.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.357 ms 00:17:48.871 00:17:48.871 --- 10.0.0.1 ping statistics --- 00:17:48.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.871 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:17:48.871 23:14:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:48.871 23:14:10 -- nvmf/common.sh@410 -- # return 0 00:17:48.871 23:14:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:48.871 23:14:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:48.871 23:14:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:48.871 23:14:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:48.871 23:14:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:48.871 23:14:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:48.871 23:14:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:48.871 23:14:10 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:17:48.871 23:14:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:48.871 23:14:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:48.871 23:14:10 -- common/autotest_common.sh@10 -- # set +x 00:17:48.871 ************************************ 00:17:48.871 START TEST nvmf_host_management 00:17:48.871 ************************************ 00:17:48.871 23:14:10 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:17:48.871 23:14:10 -- target/host_management.sh@69 -- # starttarget 00:17:48.871 23:14:10 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:17:48.871 23:14:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:48.871 23:14:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:48.871 23:14:10 -- common/autotest_common.sh@10 -- # set +x 00:17:48.871 23:14:10 -- nvmf/common.sh@469 -- # nvmfpid=2799020 00:17:48.871 23:14:10 -- nvmf/common.sh@470 -- # waitforlisten 2799020 00:17:48.871 23:14:10 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:48.871 23:14:10 -- common/autotest_common.sh@819 -- # '[' -z 2799020 ']' 00:17:48.871 23:14:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.871 23:14:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:48.871 23:14:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:48.871 23:14:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:48.871 23:14:10 -- common/autotest_common.sh@10 -- # set +x 00:17:48.871 [2024-06-07 23:14:10.856775] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
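The nvmf_tcp_init sequence above can be read as a small namespace-plumbing recipe: the first E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. Pulled out of the xtrace, the commands are:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                  # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator check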
00:17:48.871 [2024-06-07 23:14:10.856834] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:48.871 EAL: No free 2048 kB hugepages reported on node 1 00:17:48.871 [2024-06-07 23:14:10.945338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:48.871 [2024-06-07 23:14:10.992367] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:48.871 [2024-06-07 23:14:10.992525] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:48.871 [2024-06-07 23:14:10.992536] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:48.871 [2024-06-07 23:14:10.992543] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:48.871 [2024-06-07 23:14:10.992685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:48.871 [2024-06-07 23:14:10.992859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:48.871 [2024-06-07 23:14:10.993025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.871 [2024-06-07 23:14:10.993026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:49.131 23:14:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:49.131 23:14:11 -- common/autotest_common.sh@852 -- # return 0 00:17:49.131 23:14:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:49.131 23:14:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:49.131 23:14:11 -- common/autotest_common.sh@10 -- # set +x 00:17:49.131 23:14:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:49.131 23:14:11 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:49.131 23:14:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:49.131 23:14:11 -- common/autotest_common.sh@10 -- # set +x 00:17:49.132 [2024-06-07 23:14:11.676423] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:49.132 23:14:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:49.132 23:14:11 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:17:49.132 23:14:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:49.132 23:14:11 -- common/autotest_common.sh@10 -- # set +x 00:17:49.132 23:14:11 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:49.132 23:14:11 -- target/host_management.sh@23 -- # cat 00:17:49.132 23:14:11 -- target/host_management.sh@30 -- # rpc_cmd 00:17:49.132 23:14:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:49.132 23:14:11 -- common/autotest_common.sh@10 -- # set +x 00:17:49.132 Malloc0 00:17:49.132 [2024-06-07 23:14:11.735708] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:49.132 23:14:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:49.132 23:14:11 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:17:49.132 23:14:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:49.132 23:14:11 -- common/autotest_common.sh@10 -- # set +x 00:17:49.132 23:14:11 -- target/host_management.sh@73 -- # perfpid=2799239 00:17:49.132 23:14:11 -- target/host_management.sh@74 -- # 
waitforlisten 2799239 /var/tmp/bdevperf.sock 00:17:49.132 23:14:11 -- common/autotest_common.sh@819 -- # '[' -z 2799239 ']' 00:17:49.132 23:14:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:49.132 23:14:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:49.132 23:14:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:49.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:49.132 23:14:11 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:49.132 23:14:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:49.132 23:14:11 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:17:49.132 23:14:11 -- common/autotest_common.sh@10 -- # set +x 00:17:49.132 23:14:11 -- nvmf/common.sh@520 -- # config=() 00:17:49.132 23:14:11 -- nvmf/common.sh@520 -- # local subsystem config 00:17:49.132 23:14:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:49.132 23:14:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:49.132 { 00:17:49.132 "params": { 00:17:49.132 "name": "Nvme$subsystem", 00:17:49.132 "trtype": "$TEST_TRANSPORT", 00:17:49.132 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:49.132 "adrfam": "ipv4", 00:17:49.132 "trsvcid": "$NVMF_PORT", 00:17:49.132 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:49.132 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:49.132 "hdgst": ${hdgst:-false}, 00:17:49.132 "ddgst": ${ddgst:-false} 00:17:49.132 }, 00:17:49.132 "method": "bdev_nvme_attach_controller" 00:17:49.132 } 00:17:49.132 EOF 00:17:49.132 )") 00:17:49.132 23:14:11 -- nvmf/common.sh@542 -- # cat 00:17:49.132 23:14:11 -- nvmf/common.sh@544 -- # jq . 00:17:49.132 23:14:11 -- nvmf/common.sh@545 -- # IFS=, 00:17:49.132 23:14:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:49.132 "params": { 00:17:49.132 "name": "Nvme0", 00:17:49.132 "trtype": "tcp", 00:17:49.132 "traddr": "10.0.0.2", 00:17:49.132 "adrfam": "ipv4", 00:17:49.132 "trsvcid": "4420", 00:17:49.132 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:49.132 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:49.132 "hdgst": false, 00:17:49.132 "ddgst": false 00:17:49.132 }, 00:17:49.132 "method": "bdev_nvme_attach_controller" 00:17:49.132 }' 00:17:49.392 [2024-06-07 23:14:11.835152] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:49.392 [2024-06-07 23:14:11.835205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2799239 ] 00:17:49.392 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.392 [2024-06-07 23:14:11.895006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.392 [2024-06-07 23:14:11.924532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.652 Running I/O for 10 seconds... 
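gen_nvmf_target_json 0 above emits the bdev_nvme_attach_controller parameters shown in the log, and bdevperf reads them through --json /dev/fd/63. Written out as a standalone file, the equivalent would be roughly as follows; note the outer "subsystems"/"bdev" wrapper is SPDK's usual JSON-config layout and is assumed here, not visible verbatim in the log ($SPDK as above):

cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# 64-deep, 64 KiB verify workload for 10 seconds, as in the logged invocation
$SPDK/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json \
    -q 64 -o 65536 -w verify -t 10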
00:17:50.224 23:14:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:50.224 23:14:12 -- common/autotest_common.sh@852 -- # return 0 00:17:50.224 23:14:12 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:50.224 23:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:50.224 23:14:12 -- common/autotest_common.sh@10 -- # set +x 00:17:50.224 23:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:50.224 23:14:12 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:50.224 23:14:12 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:50.224 23:14:12 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:50.224 23:14:12 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:50.224 23:14:12 -- target/host_management.sh@52 -- # local ret=1 00:17:50.225 23:14:12 -- target/host_management.sh@53 -- # local i 00:17:50.225 23:14:12 -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:50.225 23:14:12 -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:50.225 23:14:12 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:50.225 23:14:12 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:50.225 23:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:50.225 23:14:12 -- common/autotest_common.sh@10 -- # set +x 00:17:50.225 23:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:50.225 23:14:12 -- target/host_management.sh@55 -- # read_io_count=1564 00:17:50.225 23:14:12 -- target/host_management.sh@58 -- # '[' 1564 -ge 100 ']' 00:17:50.225 23:14:12 -- target/host_management.sh@59 -- # ret=0 00:17:50.225 23:14:12 -- target/host_management.sh@60 -- # break 00:17:50.225 23:14:12 -- target/host_management.sh@64 -- # return 0 00:17:50.225 23:14:12 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:50.225 23:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:50.225 23:14:12 -- common/autotest_common.sh@10 -- # set +x 00:17:50.225 [2024-06-07 23:14:12.686738] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.686785] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.686793] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.686799] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.686806] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.686812] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.686819] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.686825] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the 
state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.686831] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.686837] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.686843] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.686849] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.686856] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.686862] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.686868] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.686874] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.686880] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.686886] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.686893] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.686899] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.686905] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.686911] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.686926] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.686933] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.686939] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.686945] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.686951] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.686957] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.686963] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.686970] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.686976] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.686982] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.686988] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.686995] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.687001] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.687007] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.687013] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.687019] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.687026] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.687032] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.687039] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.687045] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.687051] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.687057] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.687064] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.687070] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.687076] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.687082] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.687088] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.687096] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.687102] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 
23:14:12.687109] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.687115] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.687121] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.687127] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.687134] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.687140] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.687147] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.687153] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.687159] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.687165] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.687172] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 [2024-06-07 23:14:12.687178] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a86be0 is same with the state(5) to be set 00:17:50.225 23:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:50.225 23:14:12 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:50.225 23:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:50.225 23:14:12 -- common/autotest_common.sh@10 -- # set +x 00:17:50.225 [2024-06-07 23:14:12.692737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.226 [2024-06-07 23:14:12.692773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.692783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.226 [2024-06-07 23:14:12.692791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.692799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.226 [2024-06-07 23:14:12.692806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.692814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.226 [2024-06-07 
23:14:12.692821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.692828] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12474a0 is same with the state(5) to be set 00:17:50.226 [2024-06-07 23:14:12.692901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.226 [2024-06-07 23:14:12.692911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.692930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.226 [2024-06-07 23:14:12.692937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.692947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.226 [2024-06-07 23:14:12.692954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.692963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.226 [2024-06-07 23:14:12.692971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.692981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.226 [2024-06-07 23:14:12.692988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.692998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.226 [2024-06-07 23:14:12.693005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.693015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.226 [2024-06-07 23:14:12.693022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.693031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.226 [2024-06-07 23:14:12.693039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.693048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.226 [2024-06-07 23:14:12.693056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.693065] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.226 [2024-06-07 23:14:12.693072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.693082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.226 [2024-06-07 23:14:12.693089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.693099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.226 [2024-06-07 23:14:12.693107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.693116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.226 [2024-06-07 23:14:12.693124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.693133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.226 [2024-06-07 23:14:12.693143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.693153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.226 [2024-06-07 23:14:12.693161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.693170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.226 [2024-06-07 23:14:12.693178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.693187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.226 [2024-06-07 23:14:12.693194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.693204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.226 [2024-06-07 23:14:12.693211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.693220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.226 [2024-06-07 23:14:12.693228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.693237] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.226 [2024-06-07 23:14:12.693250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.693259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.226 [2024-06-07 23:14:12.693266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.693276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.226 [2024-06-07 23:14:12.693283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.693293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.226 [2024-06-07 23:14:12.693300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.693309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.226 [2024-06-07 23:14:12.693317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.693327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.226 [2024-06-07 23:14:12.693334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.693343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.226 [2024-06-07 23:14:12.693350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.693361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.226 [2024-06-07 23:14:12.693369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.693378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.226 [2024-06-07 23:14:12.693386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.693395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.226 [2024-06-07 23:14:12.693403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.693412] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.226 [2024-06-07 23:14:12.693420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.693429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.226 [2024-06-07 23:14:12.693437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.693446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.226 [2024-06-07 23:14:12.693454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.693464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.226 [2024-06-07 23:14:12.693471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.226 [2024-06-07 23:14:12.693481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.226 [2024-06-07 23:14:12.693488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.227 [2024-06-07 23:14:12.693498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.227 [2024-06-07 23:14:12.693506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.227 [2024-06-07 23:14:12.693515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.227 [2024-06-07 23:14:12.693523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.227 [2024-06-07 23:14:12.693532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.227 [2024-06-07 23:14:12.693540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.227 [2024-06-07 23:14:12.693549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.227 [2024-06-07 23:14:12.693557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.227 [2024-06-07 23:14:12.693566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.227 [2024-06-07 23:14:12.693575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.227 [2024-06-07 23:14:12.693584] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.227 [2024-06-07 23:14:12.693592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.227 [2024-06-07 23:14:12.693602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.227 [2024-06-07 23:14:12.693609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.227 [2024-06-07 23:14:12.693618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.227 [2024-06-07 23:14:12.693626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.227 [2024-06-07 23:14:12.693635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.227 [2024-06-07 23:14:12.693643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.227 [2024-06-07 23:14:12.693652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.227 [2024-06-07 23:14:12.693660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.227 [2024-06-07 23:14:12.693670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.227 [2024-06-07 23:14:12.693677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.227 [2024-06-07 23:14:12.693687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.227 [2024-06-07 23:14:12.693694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.227 [2024-06-07 23:14:12.693704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.227 [2024-06-07 23:14:12.693712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.227 [2024-06-07 23:14:12.693722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.227 [2024-06-07 23:14:12.693729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.227 [2024-06-07 23:14:12.693738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.227 [2024-06-07 23:14:12.693746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.227 [2024-06-07 23:14:12.693755] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.227 [2024-06-07 23:14:12.693762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.227 [2024-06-07 23:14:12.693772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.227 [2024-06-07 23:14:12.693779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.227 [2024-06-07 23:14:12.693790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.227 [2024-06-07 23:14:12.693798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.227 [2024-06-07 23:14:12.693807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.227 [2024-06-07 23:14:12.693814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.227 [2024-06-07 23:14:12.693824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.227 [2024-06-07 23:14:12.693831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.227 [2024-06-07 23:14:12.693841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.227 [2024-06-07 23:14:12.693848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.227 [2024-06-07 23:14:12.693858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.227 [2024-06-07 23:14:12.693865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.227 [2024-06-07 23:14:12.693874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.227 [2024-06-07 23:14:12.693882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.227 [2024-06-07 23:14:12.693891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.227 [2024-06-07 23:14:12.693899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.227 [2024-06-07 23:14:12.693908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.227 [2024-06-07 23:14:12.693916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.227 [2024-06-07 23:14:12.693925] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.227 [2024-06-07 23:14:12.693933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.227 [2024-06-07 23:14:12.693942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.227 [2024-06-07 23:14:12.693949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.227 [2024-06-07 23:14:12.693959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.227 [2024-06-07 23:14:12.693967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.227 [2024-06-07 23:14:12.693976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.227 [2024-06-07 23:14:12.693984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.227 [2024-06-07 23:14:12.693993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.227 [2024-06-07 23:14:12.694002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.227 [2024-06-07 23:14:12.694052] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1244cd0 was disconnected and freed. reset controller. 
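The burst of ABORTED - SQ DELETION completions and the freed qpair above are the target tearing the connection down; the startup notices earlier in this run already point at the trace facility for digging into such sequences. A hedged sketch that reuses only the commands quoted in those notices (the copy destination is arbitrary):

# snapshot the nvmf trace group at runtime, exactly as the startup notice suggests
spdk_trace -s nvmf -i 0
# or keep the shared-memory trace file for offline analysis/debug
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0.saved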
00:17:50.227 [2024-06-07 23:14:12.695241] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:50.227 task offset: 90752 on job bdev=Nvme0n1 fails 00:17:50.227 00:17:50.227 Latency(us) 00:17:50.227 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.227 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:50.227 Job: Nvme0n1 ended in about 0.59 seconds with error 00:17:50.227 Verification LBA range: start 0x0 length 0x400 00:17:50.227 Nvme0n1 : 0.59 2886.46 180.40 107.84 0.00 21065.10 1658.88 25777.49 00:17:50.227 =================================================================================================================== 00:17:50.227 Total : 2886.46 180.40 107.84 0.00 21065.10 1658.88 25777.49 00:17:50.227 [2024-06-07 23:14:12.697196] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:50.227 [2024-06-07 23:14:12.697217] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12474a0 (9): Bad file descriptor 00:17:50.227 [2024-06-07 23:14:12.698617] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:17:50.227 [2024-06-07 23:14:12.698779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:50.227 [2024-06-07 23:14:12.698800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.227 [2024-06-07 23:14:12.698814] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:17:50.227 [2024-06-07 23:14:12.698822] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:17:50.227 [2024-06-07 23:14:12.698829] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:17:50.227 [2024-06-07 23:14:12.698836] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12474a0 00:17:50.227 [2024-06-07 23:14:12.698854] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12474a0 (9): Bad file descriptor 00:17:50.228 [2024-06-07 23:14:12.698866] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:50.228 [2024-06-07 23:14:12.698873] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:50.228 [2024-06-07 23:14:12.698882] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:50.228 [2024-06-07 23:14:12.698894] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
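The rejected reconnect above ("Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'", completed with sct 1 / sc 132) is the intended effect of the nvmf_subsystem_remove_host call traced just before the abort storm; the matching nvmf_subsystem_add_host restores access so the next bdevperf run can pass. A minimal sketch of the same two steps driven directly through rpc.py against the target's default socket (paths assumed):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# revoke the host: in-flight I/O is aborted and new CONNECTs are rejected
$RPC nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# restore access so a later run can attach again
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0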
00:17:50.228 23:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:50.228 23:14:12 -- target/host_management.sh@87 -- # sleep 1 00:17:51.219 23:14:13 -- target/host_management.sh@91 -- # kill -9 2799239 00:17:51.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2799239) - No such process 00:17:51.219 23:14:13 -- target/host_management.sh@91 -- # true 00:17:51.219 23:14:13 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:17:51.219 23:14:13 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:17:51.219 23:14:13 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:17:51.219 23:14:13 -- nvmf/common.sh@520 -- # config=() 00:17:51.219 23:14:13 -- nvmf/common.sh@520 -- # local subsystem config 00:17:51.219 23:14:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:51.219 23:14:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:51.219 { 00:17:51.219 "params": { 00:17:51.219 "name": "Nvme$subsystem", 00:17:51.219 "trtype": "$TEST_TRANSPORT", 00:17:51.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:51.219 "adrfam": "ipv4", 00:17:51.219 "trsvcid": "$NVMF_PORT", 00:17:51.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:51.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:51.219 "hdgst": ${hdgst:-false}, 00:17:51.219 "ddgst": ${ddgst:-false} 00:17:51.219 }, 00:17:51.219 "method": "bdev_nvme_attach_controller" 00:17:51.219 } 00:17:51.219 EOF 00:17:51.219 )") 00:17:51.219 23:14:13 -- nvmf/common.sh@542 -- # cat 00:17:51.219 23:14:13 -- nvmf/common.sh@544 -- # jq . 00:17:51.219 23:14:13 -- nvmf/common.sh@545 -- # IFS=, 00:17:51.219 23:14:13 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:51.219 "params": { 00:17:51.219 "name": "Nvme0", 00:17:51.219 "trtype": "tcp", 00:17:51.219 "traddr": "10.0.0.2", 00:17:51.219 "adrfam": "ipv4", 00:17:51.219 "trsvcid": "4420", 00:17:51.219 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:51.219 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:51.219 "hdgst": false, 00:17:51.219 "ddgst": false 00:17:51.219 }, 00:17:51.219 "method": "bdev_nvme_attach_controller" 00:17:51.219 }' 00:17:51.219 [2024-06-07 23:14:13.769389] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:51.219 [2024-06-07 23:14:13.769454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2799686 ] 00:17:51.219 EAL: No free 2048 kB hugepages reported on node 1 00:17:51.219 [2024-06-07 23:14:13.830910] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.219 [2024-06-07 23:14:13.858542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.503 Running I/O for 1 seconds... 
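During the first run the harness decided I/O was flowing by polling the bdevperf RPC socket until num_read_ops crossed a threshold (read_io_count=1564 against '-ge 100' in the trace); the second run above simply lets bdevperf finish its 1-second verify pass. A condensed sketch of that polling check, reusing the RPC and jq filter visible in the trace (the sleep interval is an assumption; the 10 retries and the 100-read threshold mirror the trace):

SOCK=/var/tmp/bdevperf.sock
for i in $(seq 1 10); do
    reads=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s "$SOCK" bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
    [ "$reads" -ge 100 ] && break    # same threshold the trace applies to read_io_count
    sleep 1
done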
00:17:52.444 00:17:52.444 Latency(us) 00:17:52.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.444 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:52.444 Verification LBA range: start 0x0 length 0x400 00:17:52.444 Nvme0n1 : 1.01 3347.14 209.20 0.00 0.00 18831.04 1884.16 25122.13 00:17:52.444 =================================================================================================================== 00:17:52.444 Total : 3347.14 209.20 0.00 0.00 18831.04 1884.16 25122.13 00:17:52.444 23:14:15 -- target/host_management.sh@101 -- # stoptarget 00:17:52.444 23:14:15 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:17:52.704 23:14:15 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:52.705 23:14:15 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:52.705 23:14:15 -- target/host_management.sh@40 -- # nvmftestfini 00:17:52.705 23:14:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:52.705 23:14:15 -- nvmf/common.sh@116 -- # sync 00:17:52.705 23:14:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:52.705 23:14:15 -- nvmf/common.sh@119 -- # set +e 00:17:52.705 23:14:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:52.705 23:14:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:52.705 rmmod nvme_tcp 00:17:52.705 rmmod nvme_fabrics 00:17:52.705 rmmod nvme_keyring 00:17:52.705 23:14:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:52.705 23:14:15 -- nvmf/common.sh@123 -- # set -e 00:17:52.705 23:14:15 -- nvmf/common.sh@124 -- # return 0 00:17:52.705 23:14:15 -- nvmf/common.sh@477 -- # '[' -n 2799020 ']' 00:17:52.705 23:14:15 -- nvmf/common.sh@478 -- # killprocess 2799020 00:17:52.705 23:14:15 -- common/autotest_common.sh@926 -- # '[' -z 2799020 ']' 00:17:52.705 23:14:15 -- common/autotest_common.sh@930 -- # kill -0 2799020 00:17:52.705 23:14:15 -- common/autotest_common.sh@931 -- # uname 00:17:52.705 23:14:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:52.705 23:14:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2799020 00:17:52.705 23:14:15 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:52.705 23:14:15 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:52.705 23:14:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2799020' 00:17:52.705 killing process with pid 2799020 00:17:52.705 23:14:15 -- common/autotest_common.sh@945 -- # kill 2799020 00:17:52.705 23:14:15 -- common/autotest_common.sh@950 -- # wait 2799020 00:17:52.705 [2024-06-07 23:14:15.360431] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:52.705 23:14:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:52.705 23:14:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:52.705 23:14:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:52.705 23:14:15 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:52.705 23:14:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:52.705 23:14:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.705 23:14:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:52.705 23:14:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.252 23:14:17 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 
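nvmftestfini above unloads the initiator-side NVMe-oF modules, stops the target, and tears the test namespace down. A condensed sketch of that teardown with the names from this run (the netns deletion step is an assumption about what _remove_spdk_ns does; $nvmfpid stands for the nvmf_tgt pid, 2799020 in this run):

modprobe -v -r nvme-tcp              # the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring going away
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"   # stop the target and reap it, as killprocess does
ip netns delete cvl_0_0_ns_spdk      # assumption: how the spdk namespace is removed
ip -4 addr flush cvl_0_1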
00:17:55.252 00:17:55.252 real 0m6.656s 00:17:55.252 user 0m19.751s 00:17:55.252 sys 0m1.130s 00:17:55.252 23:14:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:55.252 23:14:17 -- common/autotest_common.sh@10 -- # set +x 00:17:55.252 ************************************ 00:17:55.252 END TEST nvmf_host_management 00:17:55.252 ************************************ 00:17:55.252 23:14:17 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:55.252 00:17:55.252 real 0m14.211s 00:17:55.252 user 0m21.851s 00:17:55.252 sys 0m6.532s 00:17:55.252 23:14:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:55.252 23:14:17 -- common/autotest_common.sh@10 -- # set +x 00:17:55.252 ************************************ 00:17:55.252 END TEST nvmf_host_management 00:17:55.252 ************************************ 00:17:55.252 23:14:17 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:55.252 23:14:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:55.252 23:14:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:55.252 23:14:17 -- common/autotest_common.sh@10 -- # set +x 00:17:55.252 ************************************ 00:17:55.252 START TEST nvmf_lvol 00:17:55.252 ************************************ 00:17:55.252 23:14:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:55.252 * Looking for test storage... 00:17:55.252 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:55.252 23:14:17 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:55.252 23:14:17 -- nvmf/common.sh@7 -- # uname -s 00:17:55.252 23:14:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:55.252 23:14:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:55.252 23:14:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:55.252 23:14:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:55.252 23:14:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:55.252 23:14:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:55.252 23:14:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:55.252 23:14:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:55.252 23:14:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:55.252 23:14:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:55.252 23:14:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:55.252 23:14:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:55.252 23:14:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:55.252 23:14:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:55.252 23:14:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:55.252 23:14:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:55.253 23:14:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:55.253 23:14:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:55.253 23:14:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:55.253 23:14:17 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.253 23:14:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.253 23:14:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.253 23:14:17 -- paths/export.sh@5 -- # export PATH 00:17:55.253 23:14:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.253 23:14:17 -- nvmf/common.sh@46 -- # : 0 00:17:55.253 23:14:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:55.253 23:14:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:55.253 23:14:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:55.253 23:14:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:55.253 23:14:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:55.253 23:14:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:55.253 23:14:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:55.253 23:14:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:55.253 23:14:17 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:55.253 23:14:17 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:55.253 23:14:17 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:55.253 23:14:17 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:55.253 23:14:17 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:55.253 23:14:17 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:55.253 23:14:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:55.253 23:14:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:17:55.253 23:14:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:55.253 23:14:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:55.253 23:14:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:55.253 23:14:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.253 23:14:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.253 23:14:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.253 23:14:17 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:55.253 23:14:17 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:55.253 23:14:17 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:55.253 23:14:17 -- common/autotest_common.sh@10 -- # set +x 00:18:03.402 23:14:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:03.402 23:14:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:03.402 23:14:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:03.402 23:14:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:03.402 23:14:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:03.402 23:14:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:03.402 23:14:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:03.402 23:14:24 -- nvmf/common.sh@294 -- # net_devs=() 00:18:03.402 23:14:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:03.402 23:14:24 -- nvmf/common.sh@295 -- # e810=() 00:18:03.402 23:14:24 -- nvmf/common.sh@295 -- # local -ga e810 00:18:03.402 23:14:24 -- nvmf/common.sh@296 -- # x722=() 00:18:03.402 23:14:24 -- nvmf/common.sh@296 -- # local -ga x722 00:18:03.402 23:14:24 -- nvmf/common.sh@297 -- # mlx=() 00:18:03.402 23:14:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:03.402 23:14:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:03.402 23:14:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:03.402 23:14:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:03.402 23:14:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:03.402 23:14:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:03.402 23:14:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:03.402 23:14:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:03.402 23:14:24 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:03.402 23:14:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:03.402 23:14:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:03.402 23:14:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:03.402 23:14:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:03.402 23:14:24 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:03.402 23:14:24 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:03.402 23:14:24 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:03.402 23:14:24 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:03.402 23:14:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:03.402 23:14:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:03.402 23:14:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:03.402 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:03.402 23:14:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:03.402 23:14:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:03.402 23:14:24 -- nvmf/common.sh@349 
-- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:03.402 23:14:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:03.402 23:14:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:03.402 23:14:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:03.402 23:14:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:03.402 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:03.402 23:14:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:03.402 23:14:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:03.402 23:14:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:03.402 23:14:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:03.402 23:14:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:03.402 23:14:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:03.402 23:14:24 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:03.402 23:14:24 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:03.402 23:14:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:03.402 23:14:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:03.402 23:14:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:03.402 23:14:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:03.402 23:14:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:03.402 Found net devices under 0000:31:00.0: cvl_0_0 00:18:03.402 23:14:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:03.402 23:14:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:03.402 23:14:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:03.402 23:14:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:03.402 23:14:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:03.402 23:14:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:03.402 Found net devices under 0000:31:00.1: cvl_0_1 00:18:03.402 23:14:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:03.402 23:14:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:03.402 23:14:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:03.402 23:14:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:03.402 23:14:24 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:03.402 23:14:24 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:03.402 23:14:24 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:03.402 23:14:24 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:03.402 23:14:24 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:03.402 23:14:24 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:03.402 23:14:24 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:03.402 23:14:24 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:03.402 23:14:24 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:03.402 23:14:24 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:03.402 23:14:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:03.402 23:14:24 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:03.402 23:14:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:03.402 23:14:24 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:03.402 23:14:24 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:03.402 23:14:24 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
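The probe loop above selects NICs purely by PCI vendor:device ID (0x8086/0x159b is the Intel E810 "ice" part found at 0000:31:00.0 and 0000:31:00.1) and then maps each matching PCI function to its kernel interface through sysfs. A minimal standalone sketch of that lookup, assuming only the stock sysfs layout (the pci_bus_cache arrays kept by nvmf/common.sh are omitted here):

  #!/usr/bin/env bash
  # Report net devices backed by Intel E810 (0x8086:0x159b) PCI functions,
  # the same sysfs walk gather_supported_nvmf_pci_devs performs.
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(<"$pci/vendor")
      device=$(<"$pci/device")
      [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
      echo "Found ${pci##*/} ($vendor - $device)"
      for net in "$pci"/net/*; do
          [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
      done
  done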
00:18:03.402 23:14:24 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:03.402 23:14:24 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:03.402 23:14:24 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:03.402 23:14:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:03.402 23:14:24 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:03.402 23:14:24 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:03.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:03.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:18:03.402 00:18:03.402 --- 10.0.0.2 ping statistics --- 00:18:03.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.402 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:18:03.402 23:14:24 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:03.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:03.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:18:03.402 00:18:03.402 --- 10.0.0.1 ping statistics --- 00:18:03.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.402 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:18:03.402 23:14:24 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:03.402 23:14:24 -- nvmf/common.sh@410 -- # return 0 00:18:03.402 23:14:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:03.402 23:14:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:03.402 23:14:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:03.402 23:14:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:03.402 23:14:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:03.403 23:14:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:03.403 23:14:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:03.403 23:14:24 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:18:03.403 23:14:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:03.403 23:14:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:03.403 23:14:24 -- common/autotest_common.sh@10 -- # set +x 00:18:03.403 23:14:24 -- nvmf/common.sh@469 -- # nvmfpid=2804190 00:18:03.403 23:14:24 -- nvmf/common.sh@470 -- # waitforlisten 2804190 00:18:03.403 23:14:24 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:03.403 23:14:24 -- common/autotest_common.sh@819 -- # '[' -z 2804190 ']' 00:18:03.403 23:14:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.403 23:14:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:03.403 23:14:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.403 23:14:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:03.403 23:14:24 -- common/autotest_common.sh@10 -- # set +x 00:18:03.403 [2024-06-07 23:14:24.972561] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
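By this point the harness has its two-port loopback topology in place: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and an iptables rule admits NVMe/TCP traffic on port 4420. A condensed sketch of that setup, assuming the same interface names and that the two ports are physically looped back to each other:

  # Target interface lives in its own namespace; the initiator stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP in
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator
  modprobe nvme-tcp                                                    # kernel host transport

Keeping only the target inside the namespace is what lets a single machine exercise real NIC hardware end to end: traffic leaves through cvl_0_1 and re-enters through cvl_0_0 instead of being short-circuited by the local routing table.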
00:18:03.403 [2024-06-07 23:14:24.972624] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.403 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.403 [2024-06-07 23:14:25.045027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:03.403 [2024-06-07 23:14:25.083118] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:03.403 [2024-06-07 23:14:25.083260] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:03.403 [2024-06-07 23:14:25.083274] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:03.403 [2024-06-07 23:14:25.083281] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:03.403 [2024-06-07 23:14:25.083372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:03.403 [2024-06-07 23:14:25.083474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.403 [2024-06-07 23:14:25.083475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.403 23:14:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:03.403 23:14:25 -- common/autotest_common.sh@852 -- # return 0 00:18:03.403 23:14:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:03.403 23:14:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:03.403 23:14:25 -- common/autotest_common.sh@10 -- # set +x 00:18:03.403 23:14:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:03.403 23:14:25 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:03.403 [2024-06-07 23:14:25.919140] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:03.403 23:14:25 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:03.663 23:14:26 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:18:03.663 23:14:26 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:03.663 23:14:26 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:18:03.663 23:14:26 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:18:03.924 23:14:26 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:18:04.185 23:14:26 -- target/nvmf_lvol.sh@29 -- # lvs=dab61dce-21bc-47b5-a57f-68ab082cef57 00:18:04.185 23:14:26 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u dab61dce-21bc-47b5-a57f-68ab082cef57 lvol 20 00:18:04.185 23:14:26 -- target/nvmf_lvol.sh@32 -- # lvol=45c9f684-c600-45dc-acfb-3dd6c123f017 00:18:04.185 23:14:26 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:04.446 23:14:26 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
45c9f684-c600-45dc-acfb-3dd6c123f017 00:18:04.707 23:14:27 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:04.707 [2024-06-07 23:14:27.273885] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:04.707 23:14:27 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:04.968 23:14:27 -- target/nvmf_lvol.sh@42 -- # perf_pid=2804674 00:18:04.968 23:14:27 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:18:04.968 23:14:27 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:18:04.968 EAL: No free 2048 kB hugepages reported on node 1 00:18:05.911 23:14:28 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 45c9f684-c600-45dc-acfb-3dd6c123f017 MY_SNAPSHOT 00:18:06.171 23:14:28 -- target/nvmf_lvol.sh@47 -- # snapshot=e7084149-9e90-419c-8d09-c6b515dc9cca 00:18:06.171 23:14:28 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 45c9f684-c600-45dc-acfb-3dd6c123f017 30 00:18:06.171 23:14:28 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone e7084149-9e90-419c-8d09-c6b515dc9cca MY_CLONE 00:18:06.432 23:14:29 -- target/nvmf_lvol.sh@49 -- # clone=5c4afa7a-d6df-4692-a2ef-07edaef4ff7f 00:18:06.432 23:14:29 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 5c4afa7a-d6df-4692-a2ef-07edaef4ff7f 00:18:06.693 23:14:29 -- target/nvmf_lvol.sh@53 -- # wait 2804674 00:18:16.696 Initializing NVMe Controllers 00:18:16.696 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:18:16.696 Controller IO queue size 128, less than required. 00:18:16.696 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:16.696 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:18:16.696 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:18:16.696 Initialization complete. Launching workers. 
00:18:16.696 ======================================================== 00:18:16.696 Latency(us) 00:18:16.696 Device Information : IOPS MiB/s Average min max 00:18:16.696 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12257.50 47.88 10444.98 1442.18 53023.70 00:18:16.696 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 18145.69 70.88 7053.91 390.40 34707.07 00:18:16.696 ======================================================== 00:18:16.696 Total : 30403.19 118.76 8421.07 390.40 53023.70 00:18:16.696 00:18:16.696 23:14:37 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:16.696 23:14:37 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 45c9f684-c600-45dc-acfb-3dd6c123f017 00:18:16.696 23:14:38 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dab61dce-21bc-47b5-a57f-68ab082cef57 00:18:16.696 23:14:38 -- target/nvmf_lvol.sh@60 -- # rm -f 00:18:16.696 23:14:38 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:18:16.696 23:14:38 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:18:16.696 23:14:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:16.696 23:14:38 -- nvmf/common.sh@116 -- # sync 00:18:16.696 23:14:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:16.696 23:14:38 -- nvmf/common.sh@119 -- # set +e 00:18:16.696 23:14:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:16.696 23:14:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:16.696 rmmod nvme_tcp 00:18:16.696 rmmod nvme_fabrics 00:18:16.696 rmmod nvme_keyring 00:18:16.696 23:14:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:16.696 23:14:38 -- nvmf/common.sh@123 -- # set -e 00:18:16.696 23:14:38 -- nvmf/common.sh@124 -- # return 0 00:18:16.696 23:14:38 -- nvmf/common.sh@477 -- # '[' -n 2804190 ']' 00:18:16.696 23:14:38 -- nvmf/common.sh@478 -- # killprocess 2804190 00:18:16.696 23:14:38 -- common/autotest_common.sh@926 -- # '[' -z 2804190 ']' 00:18:16.696 23:14:38 -- common/autotest_common.sh@930 -- # kill -0 2804190 00:18:16.696 23:14:38 -- common/autotest_common.sh@931 -- # uname 00:18:16.696 23:14:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:16.696 23:14:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2804190 00:18:16.696 23:14:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:16.696 23:14:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:16.696 23:14:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2804190' 00:18:16.696 killing process with pid 2804190 00:18:16.696 23:14:38 -- common/autotest_common.sh@945 -- # kill 2804190 00:18:16.696 23:14:38 -- common/autotest_common.sh@950 -- # wait 2804190 00:18:16.696 23:14:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:16.696 23:14:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:16.696 23:14:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:16.696 23:14:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:16.696 23:14:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:16.696 23:14:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.696 23:14:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:16.696 23:14:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
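The nvmf_lvol test that is tearing down here exercised the full lvol-over-raid path: two 64 MiB malloc bdevs striped into raid0, an lvstore on top, a 20 MiB lvol exported over NVMe/TCP, then snapshot/resize/clone/inflate churn while spdk_nvme_perf issues random writes against the namespace. A condensed sketch of that RPC sequence, assuming the size argument of bdev_lvol_create is in MiB for this rpc.py revision and that each create call prints the name or UUID it made (as the captured lvs/lvol variables above suggest):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                                 # Malloc0
  $rpc bdev_malloc_create 64 512                                 # Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                 # lvstore UUID
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                # 20 MiB lvol UUID
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # While perf runs: snapshot the live lvol, resize it, clone the snapshot, inflate the clone.
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"
  # Teardown mirrors creation: subsystem first, then the lvol, then the lvstore.
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc bdev_lvol_delete "$lvol"
  $rpc bdev_lvol_delete_lvstore -u "$lvs"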
00:18:18.079 23:14:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:18.079 00:18:18.079 real 0m23.048s 00:18:18.079 user 1m3.214s 00:18:18.079 sys 0m7.716s 00:18:18.079 23:14:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:18.079 23:14:40 -- common/autotest_common.sh@10 -- # set +x 00:18:18.079 ************************************ 00:18:18.079 END TEST nvmf_lvol 00:18:18.079 ************************************ 00:18:18.079 23:14:40 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:18:18.079 23:14:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:18.079 23:14:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:18.079 23:14:40 -- common/autotest_common.sh@10 -- # set +x 00:18:18.079 ************************************ 00:18:18.079 START TEST nvmf_lvs_grow 00:18:18.079 ************************************ 00:18:18.079 23:14:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:18:18.079 * Looking for test storage... 00:18:18.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:18.079 23:14:40 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:18.079 23:14:40 -- nvmf/common.sh@7 -- # uname -s 00:18:18.079 23:14:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:18.079 23:14:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:18.079 23:14:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:18.079 23:14:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:18.079 23:14:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:18.079 23:14:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:18.079 23:14:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:18.079 23:14:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:18.079 23:14:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:18.079 23:14:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:18.079 23:14:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:18.079 23:14:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:18.079 23:14:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:18.079 23:14:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:18.079 23:14:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:18.079 23:14:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:18.079 23:14:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:18.079 23:14:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:18.079 23:14:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:18.079 23:14:40 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.079 23:14:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.080 23:14:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.080 23:14:40 -- paths/export.sh@5 -- # export PATH 00:18:18.080 23:14:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.080 23:14:40 -- nvmf/common.sh@46 -- # : 0 00:18:18.080 23:14:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:18.080 23:14:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:18.080 23:14:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:18.080 23:14:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:18.080 23:14:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:18.080 23:14:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:18.080 23:14:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:18.080 23:14:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:18.080 23:14:40 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:18.080 23:14:40 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:18.080 23:14:40 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:18:18.080 23:14:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:18.080 23:14:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:18.080 23:14:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:18.080 23:14:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:18.080 23:14:40 -- nvmf/common.sh@400 -- # 
remove_spdk_ns 00:18:18.080 23:14:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.080 23:14:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:18.080 23:14:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.080 23:14:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:18.080 23:14:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:18.080 23:14:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:18.080 23:14:40 -- common/autotest_common.sh@10 -- # set +x 00:18:26.223 23:14:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:26.223 23:14:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:26.223 23:14:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:26.223 23:14:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:26.223 23:14:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:26.223 23:14:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:26.223 23:14:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:26.223 23:14:47 -- nvmf/common.sh@294 -- # net_devs=() 00:18:26.223 23:14:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:26.223 23:14:47 -- nvmf/common.sh@295 -- # e810=() 00:18:26.223 23:14:47 -- nvmf/common.sh@295 -- # local -ga e810 00:18:26.223 23:14:47 -- nvmf/common.sh@296 -- # x722=() 00:18:26.223 23:14:47 -- nvmf/common.sh@296 -- # local -ga x722 00:18:26.223 23:14:47 -- nvmf/common.sh@297 -- # mlx=() 00:18:26.223 23:14:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:26.223 23:14:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:26.223 23:14:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:26.223 23:14:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:26.223 23:14:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:26.223 23:14:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:26.223 23:14:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:26.223 23:14:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:26.223 23:14:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:26.223 23:14:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:26.223 23:14:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:26.223 23:14:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:26.223 23:14:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:26.223 23:14:47 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:26.223 23:14:47 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:26.223 23:14:47 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:26.223 23:14:47 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:26.223 23:14:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:26.223 23:14:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:26.223 23:14:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:26.223 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:26.223 23:14:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:26.223 23:14:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:26.223 23:14:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:26.223 23:14:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:26.223 23:14:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:26.223 
23:14:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:26.223 23:14:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:26.223 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:26.223 23:14:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:26.223 23:14:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:26.223 23:14:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:26.223 23:14:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:26.223 23:14:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:26.223 23:14:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:26.223 23:14:47 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:26.223 23:14:47 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:26.223 23:14:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:26.223 23:14:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:26.223 23:14:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:26.223 23:14:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:26.223 23:14:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:26.223 Found net devices under 0000:31:00.0: cvl_0_0 00:18:26.223 23:14:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:26.223 23:14:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:26.223 23:14:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:26.223 23:14:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:26.223 23:14:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:26.223 23:14:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:26.223 Found net devices under 0000:31:00.1: cvl_0_1 00:18:26.223 23:14:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:26.223 23:14:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:26.223 23:14:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:26.223 23:14:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:26.223 23:14:47 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:26.223 23:14:47 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:26.223 23:14:47 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:26.223 23:14:47 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:26.223 23:14:47 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:26.223 23:14:47 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:26.223 23:14:47 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:26.223 23:14:47 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:26.223 23:14:47 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:26.223 23:14:47 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:26.223 23:14:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:26.223 23:14:47 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:26.223 23:14:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:26.223 23:14:47 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:26.223 23:14:47 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:26.223 23:14:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:26.223 23:14:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:26.224 23:14:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:26.224 
23:14:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:26.224 23:14:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:26.224 23:14:48 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:26.224 23:14:48 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:26.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:26.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.542 ms 00:18:26.224 00:18:26.224 --- 10.0.0.2 ping statistics --- 00:18:26.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.224 rtt min/avg/max/mdev = 0.542/0.542/0.542/0.000 ms 00:18:26.224 23:14:48 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:26.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:26.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:18:26.224 00:18:26.224 --- 10.0.0.1 ping statistics --- 00:18:26.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.224 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:18:26.224 23:14:48 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:26.224 23:14:48 -- nvmf/common.sh@410 -- # return 0 00:18:26.224 23:14:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:26.224 23:14:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:26.224 23:14:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:26.224 23:14:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:26.224 23:14:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:26.224 23:14:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:26.224 23:14:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:26.224 23:14:48 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:18:26.224 23:14:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:26.224 23:14:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:26.224 23:14:48 -- common/autotest_common.sh@10 -- # set +x 00:18:26.224 23:14:48 -- nvmf/common.sh@469 -- # nvmfpid=2811020 00:18:26.224 23:14:48 -- nvmf/common.sh@470 -- # waitforlisten 2811020 00:18:26.224 23:14:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:26.224 23:14:48 -- common/autotest_common.sh@819 -- # '[' -z 2811020 ']' 00:18:26.224 23:14:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.224 23:14:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:26.224 23:14:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.224 23:14:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:26.224 23:14:48 -- common/autotest_common.sh@10 -- # set +x 00:18:26.224 [2024-06-07 23:14:48.145600] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
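nvmfappstart now launches nvmf_tgt inside the target namespace (core mask 0x1, all tracepoint groups enabled) and waitforlisten blocks until the application's RPC socket answers, so the rpc.py calls that follow never race the target's startup. A rough equivalent of that start-and-poll pattern, assuming the default /var/tmp/spdk.sock endpoint and that rpc_get_methods is available, as it is in a stock SPDK tree:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # Poll the RPC socket; bail out early if the target dies before it ever listens.
  for _ in $(seq 1 100); do
      if "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
          break
      fi
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      sleep 0.1
  done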
00:18:26.224 [2024-06-07 23:14:48.145661] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:26.224 EAL: No free 2048 kB hugepages reported on node 1 00:18:26.224 [2024-06-07 23:14:48.217622] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.224 [2024-06-07 23:14:48.255290] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:26.224 [2024-06-07 23:14:48.255425] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:26.224 [2024-06-07 23:14:48.255433] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:26.224 [2024-06-07 23:14:48.255440] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:26.224 [2024-06-07 23:14:48.255461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.485 23:14:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:26.485 23:14:48 -- common/autotest_common.sh@852 -- # return 0 00:18:26.485 23:14:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:26.485 23:14:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:26.485 23:14:48 -- common/autotest_common.sh@10 -- # set +x 00:18:26.485 23:14:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:26.485 23:14:48 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:26.485 [2024-06-07 23:14:49.078783] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:26.485 23:14:49 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:18:26.485 23:14:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:26.485 23:14:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:26.485 23:14:49 -- common/autotest_common.sh@10 -- # set +x 00:18:26.485 ************************************ 00:18:26.485 START TEST lvs_grow_clean 00:18:26.485 ************************************ 00:18:26.485 23:14:49 -- common/autotest_common.sh@1104 -- # lvs_grow 00:18:26.485 23:14:49 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:26.485 23:14:49 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:26.485 23:14:49 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:26.485 23:14:49 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:26.485 23:14:49 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:26.485 23:14:49 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:26.485 23:14:49 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:26.485 23:14:49 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:26.485 23:14:49 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:26.745 23:14:49 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:26.745 23:14:49 -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:27.006 23:14:49 -- target/nvmf_lvs_grow.sh@28 -- # lvs=981da135-1da6-4a5a-a2d5-3c5979b6a89e 00:18:27.006 23:14:49 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 981da135-1da6-4a5a-a2d5-3c5979b6a89e 00:18:27.006 23:14:49 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:27.006 23:14:49 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:27.006 23:14:49 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:27.006 23:14:49 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 981da135-1da6-4a5a-a2d5-3c5979b6a89e lvol 150 00:18:27.266 23:14:49 -- target/nvmf_lvs_grow.sh@33 -- # lvol=8fdd4e2a-f5f9-46a8-8197-bb390658956a 00:18:27.266 23:14:49 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:27.266 23:14:49 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:27.266 [2024-06-07 23:14:49.904836] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:27.266 [2024-06-07 23:14:49.904888] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:27.266 true 00:18:27.266 23:14:49 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 981da135-1da6-4a5a-a2d5-3c5979b6a89e 00:18:27.266 23:14:49 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:27.527 23:14:50 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:27.527 23:14:50 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:27.787 23:14:50 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8fdd4e2a-f5f9-46a8-8197-bb390658956a 00:18:27.787 23:14:50 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:28.048 [2024-06-07 23:14:50.526940] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:28.048 23:14:50 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:28.048 23:14:50 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2811725 00:18:28.048 23:14:50 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:28.048 23:14:50 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:28.048 23:14:50 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2811725 /var/tmp/bdevperf.sock 00:18:28.048 23:14:50 -- common/autotest_common.sh@819 -- # '[' -z 2811725 ']' 00:18:28.048 
23:14:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:28.048 23:14:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:28.048 23:14:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:28.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:28.048 23:14:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:28.048 23:14:50 -- common/autotest_common.sh@10 -- # set +x 00:18:28.308 [2024-06-07 23:14:50.734431] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:28.308 [2024-06-07 23:14:50.734482] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2811725 ] 00:18:28.308 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.308 [2024-06-07 23:14:50.810003] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.308 [2024-06-07 23:14:50.838746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.879 23:14:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:28.879 23:14:51 -- common/autotest_common.sh@852 -- # return 0 00:18:28.879 23:14:51 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:29.450 Nvme0n1 00:18:29.450 23:14:51 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:29.450 [ 00:18:29.450 { 00:18:29.450 "name": "Nvme0n1", 00:18:29.450 "aliases": [ 00:18:29.450 "8fdd4e2a-f5f9-46a8-8197-bb390658956a" 00:18:29.450 ], 00:18:29.450 "product_name": "NVMe disk", 00:18:29.450 "block_size": 4096, 00:18:29.450 "num_blocks": 38912, 00:18:29.450 "uuid": "8fdd4e2a-f5f9-46a8-8197-bb390658956a", 00:18:29.450 "assigned_rate_limits": { 00:18:29.450 "rw_ios_per_sec": 0, 00:18:29.450 "rw_mbytes_per_sec": 0, 00:18:29.450 "r_mbytes_per_sec": 0, 00:18:29.450 "w_mbytes_per_sec": 0 00:18:29.450 }, 00:18:29.450 "claimed": false, 00:18:29.450 "zoned": false, 00:18:29.450 "supported_io_types": { 00:18:29.450 "read": true, 00:18:29.450 "write": true, 00:18:29.450 "unmap": true, 00:18:29.450 "write_zeroes": true, 00:18:29.450 "flush": true, 00:18:29.450 "reset": true, 00:18:29.450 "compare": true, 00:18:29.450 "compare_and_write": true, 00:18:29.450 "abort": true, 00:18:29.450 "nvme_admin": true, 00:18:29.450 "nvme_io": true 00:18:29.450 }, 00:18:29.450 "driver_specific": { 00:18:29.450 "nvme": [ 00:18:29.450 { 00:18:29.450 "trid": { 00:18:29.450 "trtype": "TCP", 00:18:29.450 "adrfam": "IPv4", 00:18:29.450 "traddr": "10.0.0.2", 00:18:29.450 "trsvcid": "4420", 00:18:29.450 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:29.450 }, 00:18:29.450 "ctrlr_data": { 00:18:29.450 "cntlid": 1, 00:18:29.450 "vendor_id": "0x8086", 00:18:29.450 "model_number": "SPDK bdev Controller", 00:18:29.450 "serial_number": "SPDK0", 00:18:29.450 "firmware_revision": "24.01.1", 00:18:29.450 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:29.450 "oacs": { 00:18:29.450 "security": 0, 00:18:29.450 "format": 0, 00:18:29.450 "firmware": 0, 00:18:29.450 "ns_manage": 0 00:18:29.450 }, 00:18:29.450 "multi_ctrlr": 
true, 00:18:29.450 "ana_reporting": false 00:18:29.450 }, 00:18:29.450 "vs": { 00:18:29.450 "nvme_version": "1.3" 00:18:29.450 }, 00:18:29.450 "ns_data": { 00:18:29.450 "id": 1, 00:18:29.450 "can_share": true 00:18:29.450 } 00:18:29.450 } 00:18:29.450 ], 00:18:29.450 "mp_policy": "active_passive" 00:18:29.450 } 00:18:29.450 } 00:18:29.450 ] 00:18:29.450 23:14:52 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:29.450 23:14:52 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2811947 00:18:29.450 23:14:52 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:29.450 Running I/O for 10 seconds... 00:18:30.392 Latency(us) 00:18:30.392 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.392 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:30.392 Nvme0n1 : 1.00 18563.00 72.51 0.00 0.00 0.00 0.00 0.00 00:18:30.392 =================================================================================================================== 00:18:30.392 Total : 18563.00 72.51 0.00 0.00 0.00 0.00 0.00 00:18:30.392 00:18:31.333 23:14:54 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 981da135-1da6-4a5a-a2d5-3c5979b6a89e 00:18:31.594 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:31.594 Nvme0n1 : 2.00 18661.00 72.89 0.00 0.00 0.00 0.00 0.00 00:18:31.594 =================================================================================================================== 00:18:31.594 Total : 18661.00 72.89 0.00 0.00 0.00 0.00 0.00 00:18:31.594 00:18:31.594 true 00:18:31.594 23:14:54 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 981da135-1da6-4a5a-a2d5-3c5979b6a89e 00:18:31.594 23:14:54 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:31.855 23:14:54 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:31.855 23:14:54 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:31.855 23:14:54 -- target/nvmf_lvs_grow.sh@65 -- # wait 2811947 00:18:32.426 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:32.426 Nvme0n1 : 3.00 18712.67 73.10 0.00 0.00 0.00 0.00 0.00 00:18:32.426 =================================================================================================================== 00:18:32.426 Total : 18712.67 73.10 0.00 0.00 0.00 0.00 0.00 00:18:32.426 00:18:33.811 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:33.811 Nvme0n1 : 4.00 18738.50 73.20 0.00 0.00 0.00 0.00 0.00 00:18:33.811 =================================================================================================================== 00:18:33.811 Total : 18738.50 73.20 0.00 0.00 0.00 0.00 0.00 00:18:33.811 00:18:34.753 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:34.753 Nvme0n1 : 5.00 18766.80 73.31 0.00 0.00 0.00 0.00 0.00 00:18:34.753 =================================================================================================================== 00:18:34.753 Total : 18766.80 73.31 0.00 0.00 0.00 0.00 0.00 00:18:34.753 00:18:35.695 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:35.695 Nvme0n1 : 6.00 18785.50 73.38 0.00 0.00 0.00 0.00 0.00 00:18:35.695 
=================================================================================================================== 00:18:35.695 Total : 18785.50 73.38 0.00 0.00 0.00 0.00 0.00 00:18:35.695 00:18:36.637 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:36.637 Nvme0n1 : 7.00 18799.14 73.43 0.00 0.00 0.00 0.00 0.00 00:18:36.637 =================================================================================================================== 00:18:36.637 Total : 18799.14 73.43 0.00 0.00 0.00 0.00 0.00 00:18:36.637 00:18:37.657 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:37.657 Nvme0n1 : 8.00 18809.12 73.47 0.00 0.00 0.00 0.00 0.00 00:18:37.657 =================================================================================================================== 00:18:37.657 Total : 18809.12 73.47 0.00 0.00 0.00 0.00 0.00 00:18:37.657 00:18:38.637 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:38.637 Nvme0n1 : 9.00 18817.11 73.50 0.00 0.00 0.00 0.00 0.00 00:18:38.637 =================================================================================================================== 00:18:38.637 Total : 18817.11 73.50 0.00 0.00 0.00 0.00 0.00 00:18:38.637 00:18:39.578 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:39.578 Nvme0n1 : 10.00 18823.30 73.53 0.00 0.00 0.00 0.00 0.00 00:18:39.579 =================================================================================================================== 00:18:39.579 Total : 18823.30 73.53 0.00 0.00 0.00 0.00 0.00 00:18:39.579 00:18:39.579 00:18:39.579 Latency(us) 00:18:39.579 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.579 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:39.579 Nvme0n1 : 10.00 18822.35 73.52 0.00 0.00 6796.60 4259.84 15182.51 00:18:39.579 =================================================================================================================== 00:18:39.579 Total : 18822.35 73.52 0.00 0.00 6796.60 4259.84 15182.51 00:18:39.579 0 00:18:39.579 23:15:02 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2811725 00:18:39.579 23:15:02 -- common/autotest_common.sh@926 -- # '[' -z 2811725 ']' 00:18:39.579 23:15:02 -- common/autotest_common.sh@930 -- # kill -0 2811725 00:18:39.579 23:15:02 -- common/autotest_common.sh@931 -- # uname 00:18:39.579 23:15:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:39.579 23:15:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2811725 00:18:39.579 23:15:02 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:39.579 23:15:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:39.579 23:15:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2811725' 00:18:39.579 killing process with pid 2811725 00:18:39.579 23:15:02 -- common/autotest_common.sh@945 -- # kill 2811725 00:18:39.579 Received shutdown signal, test time was about 10.000000 seconds 00:18:39.579 00:18:39.579 Latency(us) 00:18:39.579 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.579 =================================================================================================================== 00:18:39.579 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:39.579 23:15:02 -- common/autotest_common.sh@950 -- # wait 2811725 00:18:39.839 23:15:02 -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:39.839 23:15:02 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 981da135-1da6-4a5a-a2d5-3c5979b6a89e 00:18:39.839 23:15:02 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:40.097 23:15:02 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:40.097 23:15:02 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:18:40.097 23:15:02 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:40.097 [2024-06-07 23:15:02.735354] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:40.097 23:15:02 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 981da135-1da6-4a5a-a2d5-3c5979b6a89e 00:18:40.097 23:15:02 -- common/autotest_common.sh@640 -- # local es=0 00:18:40.097 23:15:02 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 981da135-1da6-4a5a-a2d5-3c5979b6a89e 00:18:40.097 23:15:02 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:40.097 23:15:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:40.097 23:15:02 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:40.097 23:15:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:40.097 23:15:02 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:40.097 23:15:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:40.097 23:15:02 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:40.097 23:15:02 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:40.097 23:15:02 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 981da135-1da6-4a5a-a2d5-3c5979b6a89e 00:18:40.357 request: 00:18:40.357 { 00:18:40.357 "uuid": "981da135-1da6-4a5a-a2d5-3c5979b6a89e", 00:18:40.357 "method": "bdev_lvol_get_lvstores", 00:18:40.357 "req_id": 1 00:18:40.357 } 00:18:40.357 Got JSON-RPC error response 00:18:40.357 response: 00:18:40.357 { 00:18:40.357 "code": -19, 00:18:40.357 "message": "No such device" 00:18:40.357 } 00:18:40.357 23:15:02 -- common/autotest_common.sh@643 -- # es=1 00:18:40.357 23:15:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:40.357 23:15:02 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:40.357 23:15:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:40.357 23:15:02 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:40.615 aio_bdev 00:18:40.615 23:15:03 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 8fdd4e2a-f5f9-46a8-8197-bb390658956a 00:18:40.615 23:15:03 -- common/autotest_common.sh@887 -- # local bdev_name=8fdd4e2a-f5f9-46a8-8197-bb390658956a 00:18:40.615 23:15:03 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:40.615 23:15:03 -- common/autotest_common.sh@889 -- # local i 00:18:40.615 23:15:03 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:40.615 23:15:03 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:40.615 23:15:03 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:40.615 23:15:03 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8fdd4e2a-f5f9-46a8-8197-bb390658956a -t 2000 00:18:40.874 [ 00:18:40.874 { 00:18:40.874 "name": "8fdd4e2a-f5f9-46a8-8197-bb390658956a", 00:18:40.874 "aliases": [ 00:18:40.874 "lvs/lvol" 00:18:40.874 ], 00:18:40.874 "product_name": "Logical Volume", 00:18:40.874 "block_size": 4096, 00:18:40.874 "num_blocks": 38912, 00:18:40.874 "uuid": "8fdd4e2a-f5f9-46a8-8197-bb390658956a", 00:18:40.874 "assigned_rate_limits": { 00:18:40.874 "rw_ios_per_sec": 0, 00:18:40.874 "rw_mbytes_per_sec": 0, 00:18:40.874 "r_mbytes_per_sec": 0, 00:18:40.874 "w_mbytes_per_sec": 0 00:18:40.874 }, 00:18:40.874 "claimed": false, 00:18:40.874 "zoned": false, 00:18:40.874 "supported_io_types": { 00:18:40.874 "read": true, 00:18:40.874 "write": true, 00:18:40.874 "unmap": true, 00:18:40.874 "write_zeroes": true, 00:18:40.874 "flush": false, 00:18:40.874 "reset": true, 00:18:40.874 "compare": false, 00:18:40.874 "compare_and_write": false, 00:18:40.874 "abort": false, 00:18:40.874 "nvme_admin": false, 00:18:40.874 "nvme_io": false 00:18:40.874 }, 00:18:40.874 "driver_specific": { 00:18:40.874 "lvol": { 00:18:40.874 "lvol_store_uuid": "981da135-1da6-4a5a-a2d5-3c5979b6a89e", 00:18:40.874 "base_bdev": "aio_bdev", 00:18:40.874 "thin_provision": false, 00:18:40.874 "snapshot": false, 00:18:40.874 "clone": false, 00:18:40.874 "esnap_clone": false 00:18:40.874 } 00:18:40.874 } 00:18:40.874 } 00:18:40.874 ] 00:18:40.874 23:15:03 -- common/autotest_common.sh@895 -- # return 0 00:18:40.874 23:15:03 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 981da135-1da6-4a5a-a2d5-3c5979b6a89e 00:18:40.874 23:15:03 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:40.874 23:15:03 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:40.874 23:15:03 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 981da135-1da6-4a5a-a2d5-3c5979b6a89e 00:18:40.874 23:15:03 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:41.134 23:15:03 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:41.134 23:15:03 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8fdd4e2a-f5f9-46a8-8197-bb390658956a 00:18:41.134 23:15:03 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 981da135-1da6-4a5a-a2d5-3c5979b6a89e 00:18:41.394 23:15:03 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:41.654 23:15:04 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:41.654 00:18:41.654 real 0m15.005s 00:18:41.654 user 0m14.738s 00:18:41.654 sys 0m1.164s 00:18:41.654 23:15:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 
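lvs_grow_clean, which finishes here, is the grow-an-lvstore-in-place scenario: a 200 MiB file is wrapped in an aio bdev, an lvstore with 4 MiB clusters is created on it (49 data clusters), a 150 MiB lvol from that store is served to bdevperf over NVMe/TCP, and while I/O is still running the backing file is enlarged, the aio bdev rescanned, and the lvstore grown to 99 clusters; deleting and re-creating the aio bdev afterwards shows the lvol metadata survives on the backing file. A stripped-down sketch of the grow path itself, using the same rpc.py calls the test drives:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  aio=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
  truncate -s 200M "$aio"
  $rpc bdev_aio_create "$aio" aio_bdev 4096                     # 4 KiB logical blocks
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)        # 49 data clusters
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)              # 150 MiB lvol
  truncate -s 400M "$aio"                                       # grow the backing file...
  $rpc bdev_aio_rescan aio_bdev                                 # ...tell the aio bdev about it...
  $rpc bdev_lvol_grow_lvstore -u "$lvs"                         # ...and stretch the lvstore over it
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 99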
00:18:41.654 23:15:04 -- common/autotest_common.sh@10 -- # set +x 00:18:41.654 ************************************ 00:18:41.654 END TEST lvs_grow_clean 00:18:41.654 ************************************ 00:18:41.654 23:15:04 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:18:41.654 23:15:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:41.654 23:15:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:41.654 23:15:04 -- common/autotest_common.sh@10 -- # set +x 00:18:41.654 ************************************ 00:18:41.654 START TEST lvs_grow_dirty 00:18:41.654 ************************************ 00:18:41.654 23:15:04 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:18:41.654 23:15:04 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:41.654 23:15:04 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:41.654 23:15:04 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:41.654 23:15:04 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:41.654 23:15:04 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:41.654 23:15:04 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:41.654 23:15:04 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:41.654 23:15:04 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:41.654 23:15:04 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:41.914 23:15:04 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:41.914 23:15:04 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:41.914 23:15:04 -- target/nvmf_lvs_grow.sh@28 -- # lvs=d1805c9a-803c-47b4-acfd-e418478404fd 00:18:41.914 23:15:04 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1805c9a-803c-47b4-acfd-e418478404fd 00:18:41.914 23:15:04 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:42.174 23:15:04 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:42.174 23:15:04 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:42.174 23:15:04 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d1805c9a-803c-47b4-acfd-e418478404fd lvol 150 00:18:42.174 23:15:04 -- target/nvmf_lvs_grow.sh@33 -- # lvol=17920659-d9fe-4c2d-ad67-e465eff7b911 00:18:42.174 23:15:04 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:42.174 23:15:04 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:42.435 [2024-06-07 23:15:04.921668] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:42.435 [2024-06-07 23:15:04.921719] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:42.435 
true 00:18:42.435 23:15:04 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1805c9a-803c-47b4-acfd-e418478404fd 00:18:42.435 23:15:04 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:42.435 23:15:05 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:42.435 23:15:05 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:42.695 23:15:05 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 17920659-d9fe-4c2d-ad67-e465eff7b911 00:18:42.695 23:15:05 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:42.955 23:15:05 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:43.216 23:15:05 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2814688 00:18:43.216 23:15:05 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:43.216 23:15:05 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:43.216 23:15:05 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2814688 /var/tmp/bdevperf.sock 00:18:43.216 23:15:05 -- common/autotest_common.sh@819 -- # '[' -z 2814688 ']' 00:18:43.216 23:15:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:43.216 23:15:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:43.216 23:15:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:43.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:43.216 23:15:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:43.216 23:15:05 -- common/autotest_common.sh@10 -- # set +x 00:18:43.216 [2024-06-07 23:15:05.694998] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
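Note: the trace above builds the fixture for the dirty-grow case: a file-backed AIO bdev carrying a logical volume store and a single 150M lvol, the backing file is then grown from 200M to 400M and rescanned, and the lvol is exported over NVMe/TCP. A condensed sketch of that RPC sequence, with illustrative paths and names and assuming a running nvmf_tgt plus the stock scripts/rpc.py, looks like:

  # file-backed AIO bdev -> lvolstore -> lvol (sizes in MiB, path hypothetical)
  truncate -s 200M /tmp/aio_file
  rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
  lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 aio_bdev lvs)
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)
  # grow the backing file and let the AIO bdev notice; the lvolstore itself is
  # grown later with bdev_lvol_grow_lvstore while bdevperf I/O is in flight
  truncate -s 400M /tmp/aio_file
  rpc.py bdev_aio_rescan aio_bdev
  # export the lvol over NVMe/TCP so bdevperf can attach to it
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420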
00:18:43.216 [2024-06-07 23:15:05.695048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2814688 ] 00:18:43.216 EAL: No free 2048 kB hugepages reported on node 1 00:18:43.216 [2024-06-07 23:15:05.745576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.216 [2024-06-07 23:15:05.772556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.158 23:15:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:44.158 23:15:06 -- common/autotest_common.sh@852 -- # return 0 00:18:44.158 23:15:06 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:44.158 Nvme0n1 00:18:44.158 23:15:06 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:44.158 [ 00:18:44.158 { 00:18:44.158 "name": "Nvme0n1", 00:18:44.158 "aliases": [ 00:18:44.158 "17920659-d9fe-4c2d-ad67-e465eff7b911" 00:18:44.158 ], 00:18:44.158 "product_name": "NVMe disk", 00:18:44.158 "block_size": 4096, 00:18:44.158 "num_blocks": 38912, 00:18:44.158 "uuid": "17920659-d9fe-4c2d-ad67-e465eff7b911", 00:18:44.158 "assigned_rate_limits": { 00:18:44.158 "rw_ios_per_sec": 0, 00:18:44.158 "rw_mbytes_per_sec": 0, 00:18:44.158 "r_mbytes_per_sec": 0, 00:18:44.158 "w_mbytes_per_sec": 0 00:18:44.158 }, 00:18:44.158 "claimed": false, 00:18:44.158 "zoned": false, 00:18:44.158 "supported_io_types": { 00:18:44.158 "read": true, 00:18:44.158 "write": true, 00:18:44.158 "unmap": true, 00:18:44.158 "write_zeroes": true, 00:18:44.158 "flush": true, 00:18:44.158 "reset": true, 00:18:44.158 "compare": true, 00:18:44.158 "compare_and_write": true, 00:18:44.158 "abort": true, 00:18:44.158 "nvme_admin": true, 00:18:44.158 "nvme_io": true 00:18:44.158 }, 00:18:44.158 "driver_specific": { 00:18:44.158 "nvme": [ 00:18:44.158 { 00:18:44.158 "trid": { 00:18:44.158 "trtype": "TCP", 00:18:44.158 "adrfam": "IPv4", 00:18:44.158 "traddr": "10.0.0.2", 00:18:44.158 "trsvcid": "4420", 00:18:44.158 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:44.158 }, 00:18:44.158 "ctrlr_data": { 00:18:44.158 "cntlid": 1, 00:18:44.158 "vendor_id": "0x8086", 00:18:44.158 "model_number": "SPDK bdev Controller", 00:18:44.158 "serial_number": "SPDK0", 00:18:44.158 "firmware_revision": "24.01.1", 00:18:44.158 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:44.158 "oacs": { 00:18:44.158 "security": 0, 00:18:44.158 "format": 0, 00:18:44.158 "firmware": 0, 00:18:44.158 "ns_manage": 0 00:18:44.158 }, 00:18:44.158 "multi_ctrlr": true, 00:18:44.158 "ana_reporting": false 00:18:44.158 }, 00:18:44.158 "vs": { 00:18:44.158 "nvme_version": "1.3" 00:18:44.158 }, 00:18:44.158 "ns_data": { 00:18:44.158 "id": 1, 00:18:44.158 "can_share": true 00:18:44.158 } 00:18:44.158 } 00:18:44.158 ], 00:18:44.158 "mp_policy": "active_passive" 00:18:44.158 } 00:18:44.158 } 00:18:44.158 ] 00:18:44.418 23:15:06 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2814963 00:18:44.418 23:15:06 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:44.418 23:15:06 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:44.418 Running I/O 
for 10 seconds... 00:18:45.360 Latency(us) 00:18:45.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.360 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:45.360 Nvme0n1 : 1.00 18698.00 73.04 0.00 0.00 0.00 0.00 0.00 00:18:45.360 =================================================================================================================== 00:18:45.360 Total : 18698.00 73.04 0.00 0.00 0.00 0.00 0.00 00:18:45.360 00:18:46.302 23:15:08 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d1805c9a-803c-47b4-acfd-e418478404fd 00:18:46.302 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:46.302 Nvme0n1 : 2.00 18821.00 73.52 0.00 0.00 0.00 0.00 0.00 00:18:46.302 =================================================================================================================== 00:18:46.302 Total : 18821.00 73.52 0.00 0.00 0.00 0.00 0.00 00:18:46.302 00:18:46.563 true 00:18:46.563 23:15:09 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1805c9a-803c-47b4-acfd-e418478404fd 00:18:46.563 23:15:09 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:46.563 23:15:09 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:46.563 23:15:09 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:46.563 23:15:09 -- target/nvmf_lvs_grow.sh@65 -- # wait 2814963 00:18:47.505 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:47.505 Nvme0n1 : 3.00 18845.00 73.61 0.00 0.00 0.00 0.00 0.00 00:18:47.505 =================================================================================================================== 00:18:47.505 Total : 18845.00 73.61 0.00 0.00 0.00 0.00 0.00 00:18:47.505 00:18:48.447 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:48.447 Nvme0n1 : 4.00 18822.00 73.52 0.00 0.00 0.00 0.00 0.00 00:18:48.447 =================================================================================================================== 00:18:48.447 Total : 18822.00 73.52 0.00 0.00 0.00 0.00 0.00 00:18:48.447 00:18:49.390 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:49.390 Nvme0n1 : 5.00 18859.00 73.67 0.00 0.00 0.00 0.00 0.00 00:18:49.390 =================================================================================================================== 00:18:49.390 Total : 18859.00 73.67 0.00 0.00 0.00 0.00 0.00 00:18:49.390 00:18:50.334 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:50.334 Nvme0n1 : 6.00 18884.00 73.77 0.00 0.00 0.00 0.00 0.00 00:18:50.334 =================================================================================================================== 00:18:50.334 Total : 18884.00 73.77 0.00 0.00 0.00 0.00 0.00 00:18:50.334 00:18:51.276 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:51.276 Nvme0n1 : 7.00 18910.86 73.87 0.00 0.00 0.00 0.00 0.00 00:18:51.276 =================================================================================================================== 00:18:51.276 Total : 18910.86 73.87 0.00 0.00 0.00 0.00 0.00 00:18:51.276 00:18:52.659 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:52.659 Nvme0n1 : 8.00 18922.88 73.92 0.00 0.00 0.00 0.00 0.00 00:18:52.659 
=================================================================================================================== 00:18:52.659 Total : 18922.88 73.92 0.00 0.00 0.00 0.00 0.00 00:18:52.659 00:18:53.601 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:53.601 Nvme0n1 : 9.00 18939.44 73.98 0.00 0.00 0.00 0.00 0.00 00:18:53.601 =================================================================================================================== 00:18:53.601 Total : 18939.44 73.98 0.00 0.00 0.00 0.00 0.00 00:18:53.601 00:18:54.542 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:54.542 Nvme0n1 : 10.00 18952.70 74.03 0.00 0.00 0.00 0.00 0.00 00:18:54.542 =================================================================================================================== 00:18:54.542 Total : 18952.70 74.03 0.00 0.00 0.00 0.00 0.00 00:18:54.542 00:18:54.542 00:18:54.542 Latency(us) 00:18:54.542 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.542 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:54.542 Nvme0n1 : 10.01 18954.64 74.04 0.00 0.00 6749.36 4232.53 13762.56 00:18:54.542 =================================================================================================================== 00:18:54.542 Total : 18954.64 74.04 0.00 0.00 6749.36 4232.53 13762.56 00:18:54.542 0 00:18:54.542 23:15:16 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2814688 00:18:54.542 23:15:16 -- common/autotest_common.sh@926 -- # '[' -z 2814688 ']' 00:18:54.542 23:15:16 -- common/autotest_common.sh@930 -- # kill -0 2814688 00:18:54.542 23:15:16 -- common/autotest_common.sh@931 -- # uname 00:18:54.542 23:15:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:54.542 23:15:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2814688 00:18:54.542 23:15:17 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:54.542 23:15:17 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:54.542 23:15:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2814688' 00:18:54.542 killing process with pid 2814688 00:18:54.542 23:15:17 -- common/autotest_common.sh@945 -- # kill 2814688 00:18:54.542 Received shutdown signal, test time was about 10.000000 seconds 00:18:54.542 00:18:54.542 Latency(us) 00:18:54.542 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.542 =================================================================================================================== 00:18:54.542 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:54.542 23:15:17 -- common/autotest_common.sh@950 -- # wait 2814688 00:18:54.542 23:15:17 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:54.803 23:15:17 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1805c9a-803c-47b4-acfd-e418478404fd 00:18:54.803 23:15:17 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:54.803 23:15:17 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:54.803 23:15:17 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:18:54.803 23:15:17 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 2811020 00:18:54.803 23:15:17 -- target/nvmf_lvs_grow.sh@74 -- # wait 2811020 00:18:55.064 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 2811020 Killed "${NVMF_APP[@]}" "$@" 00:18:55.064 23:15:17 -- target/nvmf_lvs_grow.sh@74 -- # true 00:18:55.064 23:15:17 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:18:55.064 23:15:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:55.064 23:15:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:55.064 23:15:17 -- common/autotest_common.sh@10 -- # set +x 00:18:55.064 23:15:17 -- nvmf/common.sh@469 -- # nvmfpid=2817449 00:18:55.064 23:15:17 -- nvmf/common.sh@470 -- # waitforlisten 2817449 00:18:55.064 23:15:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:55.064 23:15:17 -- common/autotest_common.sh@819 -- # '[' -z 2817449 ']' 00:18:55.064 23:15:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.064 23:15:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:55.064 23:15:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.064 23:15:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:55.064 23:15:17 -- common/autotest_common.sh@10 -- # set +x 00:18:55.064 [2024-06-07 23:15:17.568745] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:55.064 [2024-06-07 23:15:17.568795] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.064 EAL: No free 2048 kB hugepages reported on node 1 00:18:55.064 [2024-06-07 23:15:17.635749] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.064 [2024-06-07 23:15:17.664482] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:55.064 [2024-06-07 23:15:17.664605] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:55.064 [2024-06-07 23:15:17.664614] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:55.064 [2024-06-07 23:15:17.664621] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
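Note: by this point the test has grown the live lvolstore with bdev_lvol_grow_lvstore while bdevperf was writing, then SIGKILLed the target so the store is left dirty, and started a fresh nvmf_tgt. The lines below re-create the AIO bdev on the new target, which triggers the blobstore recovery notices and lets the test confirm the grow was persisted. A minimal sketch of that recovery check, reusing the illustrative names from the sketch above, would be:

  # re-attach the same backing file on the restarted target; opening it replays
  # the dirty lvolstore metadata ("Performing recovery on blobstore" below)
  rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
  rpc.py bdev_wait_for_examine
  # the grown geometry must have survived the kill -9
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expected: 99
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'         # expected: 61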
00:18:55.064 [2024-06-07 23:15:17.664642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.005 23:15:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:56.005 23:15:18 -- common/autotest_common.sh@852 -- # return 0 00:18:56.005 23:15:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:56.005 23:15:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:56.005 23:15:18 -- common/autotest_common.sh@10 -- # set +x 00:18:56.005 23:15:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:56.005 23:15:18 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:56.005 [2024-06-07 23:15:18.567415] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:56.005 [2024-06-07 23:15:18.567502] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:56.005 [2024-06-07 23:15:18.567530] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:56.005 23:15:18 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:18:56.005 23:15:18 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 17920659-d9fe-4c2d-ad67-e465eff7b911 00:18:56.005 23:15:18 -- common/autotest_common.sh@887 -- # local bdev_name=17920659-d9fe-4c2d-ad67-e465eff7b911 00:18:56.005 23:15:18 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:56.005 23:15:18 -- common/autotest_common.sh@889 -- # local i 00:18:56.005 23:15:18 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:56.005 23:15:18 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:56.005 23:15:18 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:56.265 23:15:18 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 17920659-d9fe-4c2d-ad67-e465eff7b911 -t 2000 00:18:56.265 [ 00:18:56.265 { 00:18:56.265 "name": "17920659-d9fe-4c2d-ad67-e465eff7b911", 00:18:56.265 "aliases": [ 00:18:56.265 "lvs/lvol" 00:18:56.265 ], 00:18:56.265 "product_name": "Logical Volume", 00:18:56.265 "block_size": 4096, 00:18:56.265 "num_blocks": 38912, 00:18:56.265 "uuid": "17920659-d9fe-4c2d-ad67-e465eff7b911", 00:18:56.265 "assigned_rate_limits": { 00:18:56.265 "rw_ios_per_sec": 0, 00:18:56.265 "rw_mbytes_per_sec": 0, 00:18:56.265 "r_mbytes_per_sec": 0, 00:18:56.265 "w_mbytes_per_sec": 0 00:18:56.265 }, 00:18:56.265 "claimed": false, 00:18:56.265 "zoned": false, 00:18:56.265 "supported_io_types": { 00:18:56.265 "read": true, 00:18:56.265 "write": true, 00:18:56.265 "unmap": true, 00:18:56.265 "write_zeroes": true, 00:18:56.265 "flush": false, 00:18:56.265 "reset": true, 00:18:56.265 "compare": false, 00:18:56.265 "compare_and_write": false, 00:18:56.265 "abort": false, 00:18:56.265 "nvme_admin": false, 00:18:56.265 "nvme_io": false 00:18:56.265 }, 00:18:56.265 "driver_specific": { 00:18:56.265 "lvol": { 00:18:56.265 "lvol_store_uuid": "d1805c9a-803c-47b4-acfd-e418478404fd", 00:18:56.265 "base_bdev": "aio_bdev", 00:18:56.265 "thin_provision": false, 00:18:56.265 "snapshot": false, 00:18:56.265 "clone": false, 00:18:56.265 "esnap_clone": false 00:18:56.265 } 00:18:56.265 } 00:18:56.265 } 00:18:56.265 ] 00:18:56.265 23:15:18 -- common/autotest_common.sh@895 -- # return 0 00:18:56.265 23:15:18 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1805c9a-803c-47b4-acfd-e418478404fd 00:18:56.265 23:15:18 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:18:56.526 23:15:19 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:18:56.526 23:15:19 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1805c9a-803c-47b4-acfd-e418478404fd 00:18:56.526 23:15:19 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:18:56.526 23:15:19 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:18:56.526 23:15:19 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:56.786 [2024-06-07 23:15:19.299311] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:56.786 23:15:19 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1805c9a-803c-47b4-acfd-e418478404fd 00:18:56.786 23:15:19 -- common/autotest_common.sh@640 -- # local es=0 00:18:56.786 23:15:19 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1805c9a-803c-47b4-acfd-e418478404fd 00:18:56.786 23:15:19 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:56.786 23:15:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:56.786 23:15:19 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:56.786 23:15:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:56.786 23:15:19 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:56.786 23:15:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:56.786 23:15:19 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:56.786 23:15:19 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:56.786 23:15:19 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1805c9a-803c-47b4-acfd-e418478404fd 00:18:57.047 request: 00:18:57.047 { 00:18:57.047 "uuid": "d1805c9a-803c-47b4-acfd-e418478404fd", 00:18:57.047 "method": "bdev_lvol_get_lvstores", 00:18:57.047 "req_id": 1 00:18:57.047 } 00:18:57.047 Got JSON-RPC error response 00:18:57.047 response: 00:18:57.047 { 00:18:57.047 "code": -19, 00:18:57.047 "message": "No such device" 00:18:57.047 } 00:18:57.047 23:15:19 -- common/autotest_common.sh@643 -- # es=1 00:18:57.047 23:15:19 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:57.047 23:15:19 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:57.047 23:15:19 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:57.047 23:15:19 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:57.047 aio_bdev 00:18:57.047 23:15:19 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 17920659-d9fe-4c2d-ad67-e465eff7b911 00:18:57.047 23:15:19 -- 
common/autotest_common.sh@887 -- # local bdev_name=17920659-d9fe-4c2d-ad67-e465eff7b911 00:18:57.047 23:15:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:57.047 23:15:19 -- common/autotest_common.sh@889 -- # local i 00:18:57.047 23:15:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:57.048 23:15:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:57.048 23:15:19 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:57.307 23:15:19 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 17920659-d9fe-4c2d-ad67-e465eff7b911 -t 2000 00:18:57.307 [ 00:18:57.307 { 00:18:57.307 "name": "17920659-d9fe-4c2d-ad67-e465eff7b911", 00:18:57.307 "aliases": [ 00:18:57.307 "lvs/lvol" 00:18:57.307 ], 00:18:57.307 "product_name": "Logical Volume", 00:18:57.307 "block_size": 4096, 00:18:57.307 "num_blocks": 38912, 00:18:57.307 "uuid": "17920659-d9fe-4c2d-ad67-e465eff7b911", 00:18:57.307 "assigned_rate_limits": { 00:18:57.307 "rw_ios_per_sec": 0, 00:18:57.307 "rw_mbytes_per_sec": 0, 00:18:57.307 "r_mbytes_per_sec": 0, 00:18:57.307 "w_mbytes_per_sec": 0 00:18:57.307 }, 00:18:57.307 "claimed": false, 00:18:57.307 "zoned": false, 00:18:57.307 "supported_io_types": { 00:18:57.307 "read": true, 00:18:57.307 "write": true, 00:18:57.307 "unmap": true, 00:18:57.307 "write_zeroes": true, 00:18:57.307 "flush": false, 00:18:57.307 "reset": true, 00:18:57.307 "compare": false, 00:18:57.307 "compare_and_write": false, 00:18:57.307 "abort": false, 00:18:57.307 "nvme_admin": false, 00:18:57.307 "nvme_io": false 00:18:57.307 }, 00:18:57.308 "driver_specific": { 00:18:57.308 "lvol": { 00:18:57.308 "lvol_store_uuid": "d1805c9a-803c-47b4-acfd-e418478404fd", 00:18:57.308 "base_bdev": "aio_bdev", 00:18:57.308 "thin_provision": false, 00:18:57.308 "snapshot": false, 00:18:57.308 "clone": false, 00:18:57.308 "esnap_clone": false 00:18:57.308 } 00:18:57.308 } 00:18:57.308 } 00:18:57.308 ] 00:18:57.308 23:15:19 -- common/autotest_common.sh@895 -- # return 0 00:18:57.308 23:15:19 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1805c9a-803c-47b4-acfd-e418478404fd 00:18:57.308 23:15:19 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:57.568 23:15:20 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:57.568 23:15:20 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1805c9a-803c-47b4-acfd-e418478404fd 00:18:57.568 23:15:20 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:57.568 23:15:20 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:57.568 23:15:20 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 17920659-d9fe-4c2d-ad67-e465eff7b911 00:18:57.829 23:15:20 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d1805c9a-803c-47b4-acfd-e418478404fd 00:18:58.090 23:15:20 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:58.091 23:15:20 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:58.091 00:18:58.091 real 0m16.546s 00:18:58.091 user 
0m43.446s 00:18:58.091 sys 0m2.701s 00:18:58.091 23:15:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:58.091 23:15:20 -- common/autotest_common.sh@10 -- # set +x 00:18:58.091 ************************************ 00:18:58.091 END TEST lvs_grow_dirty 00:18:58.091 ************************************ 00:18:58.091 23:15:20 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:18:58.091 23:15:20 -- common/autotest_common.sh@796 -- # type=--id 00:18:58.091 23:15:20 -- common/autotest_common.sh@797 -- # id=0 00:18:58.091 23:15:20 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:18:58.091 23:15:20 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:58.091 23:15:20 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:18:58.091 23:15:20 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:18:58.091 23:15:20 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:18:58.091 23:15:20 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:58.091 nvmf_trace.0 00:18:58.352 23:15:20 -- common/autotest_common.sh@811 -- # return 0 00:18:58.352 23:15:20 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:58.352 23:15:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:58.352 23:15:20 -- nvmf/common.sh@116 -- # sync 00:18:58.352 23:15:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:58.352 23:15:20 -- nvmf/common.sh@119 -- # set +e 00:18:58.352 23:15:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:58.352 23:15:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:58.352 rmmod nvme_tcp 00:18:58.352 rmmod nvme_fabrics 00:18:58.352 rmmod nvme_keyring 00:18:58.352 23:15:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:58.352 23:15:20 -- nvmf/common.sh@123 -- # set -e 00:18:58.352 23:15:20 -- nvmf/common.sh@124 -- # return 0 00:18:58.352 23:15:20 -- nvmf/common.sh@477 -- # '[' -n 2817449 ']' 00:18:58.352 23:15:20 -- nvmf/common.sh@478 -- # killprocess 2817449 00:18:58.352 23:15:20 -- common/autotest_common.sh@926 -- # '[' -z 2817449 ']' 00:18:58.352 23:15:20 -- common/autotest_common.sh@930 -- # kill -0 2817449 00:18:58.352 23:15:20 -- common/autotest_common.sh@931 -- # uname 00:18:58.352 23:15:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:58.352 23:15:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2817449 00:18:58.352 23:15:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:58.352 23:15:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:58.352 23:15:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2817449' 00:18:58.352 killing process with pid 2817449 00:18:58.352 23:15:20 -- common/autotest_common.sh@945 -- # kill 2817449 00:18:58.352 23:15:20 -- common/autotest_common.sh@950 -- # wait 2817449 00:18:58.352 23:15:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:58.352 23:15:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:58.352 23:15:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:58.352 23:15:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:58.352 23:15:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:58.352 23:15:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.352 23:15:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:58.352 23:15:21 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:19:00.900 23:15:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:00.900 00:19:00.900 real 0m42.474s 00:19:00.900 user 1m4.012s 00:19:00.900 sys 0m9.687s 00:19:00.900 23:15:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:00.900 23:15:23 -- common/autotest_common.sh@10 -- # set +x 00:19:00.900 ************************************ 00:19:00.900 END TEST nvmf_lvs_grow 00:19:00.900 ************************************ 00:19:00.900 23:15:23 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:19:00.900 23:15:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:00.900 23:15:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:00.900 23:15:23 -- common/autotest_common.sh@10 -- # set +x 00:19:00.900 ************************************ 00:19:00.900 START TEST nvmf_bdev_io_wait 00:19:00.900 ************************************ 00:19:00.900 23:15:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:19:00.900 * Looking for test storage... 00:19:00.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:00.900 23:15:23 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:00.900 23:15:23 -- nvmf/common.sh@7 -- # uname -s 00:19:00.900 23:15:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:00.900 23:15:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:00.900 23:15:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:00.900 23:15:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:00.900 23:15:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:00.900 23:15:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:00.900 23:15:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:00.900 23:15:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:00.900 23:15:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:00.900 23:15:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:00.900 23:15:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:00.900 23:15:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:00.900 23:15:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:00.900 23:15:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:00.900 23:15:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:00.900 23:15:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:00.900 23:15:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:00.900 23:15:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:00.900 23:15:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:00.900 23:15:23 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.900 23:15:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.900 23:15:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.900 23:15:23 -- paths/export.sh@5 -- # export PATH 00:19:00.900 23:15:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.900 23:15:23 -- nvmf/common.sh@46 -- # : 0 00:19:00.900 23:15:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:00.900 23:15:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:00.900 23:15:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:00.900 23:15:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:00.900 23:15:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:00.900 23:15:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:00.900 23:15:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:00.900 23:15:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:00.900 23:15:23 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:00.900 23:15:23 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:00.900 23:15:23 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:19:00.900 23:15:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:00.900 23:15:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:00.900 23:15:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:00.900 23:15:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:00.900 23:15:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:00.900 23:15:23 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.900 23:15:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:00.900 23:15:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.900 23:15:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:00.900 23:15:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:00.900 23:15:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:00.900 23:15:23 -- common/autotest_common.sh@10 -- # set +x 00:19:09.040 23:15:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:09.040 23:15:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:09.040 23:15:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:09.040 23:15:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:09.040 23:15:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:09.040 23:15:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:09.040 23:15:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:09.041 23:15:30 -- nvmf/common.sh@294 -- # net_devs=() 00:19:09.041 23:15:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:09.041 23:15:30 -- nvmf/common.sh@295 -- # e810=() 00:19:09.041 23:15:30 -- nvmf/common.sh@295 -- # local -ga e810 00:19:09.041 23:15:30 -- nvmf/common.sh@296 -- # x722=() 00:19:09.041 23:15:30 -- nvmf/common.sh@296 -- # local -ga x722 00:19:09.041 23:15:30 -- nvmf/common.sh@297 -- # mlx=() 00:19:09.041 23:15:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:09.041 23:15:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:09.041 23:15:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:09.041 23:15:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:09.041 23:15:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:09.041 23:15:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:09.041 23:15:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:09.041 23:15:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:09.041 23:15:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:09.041 23:15:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:09.041 23:15:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:09.041 23:15:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:09.041 23:15:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:09.041 23:15:30 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:09.041 23:15:30 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:09.041 23:15:30 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:09.041 23:15:30 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:09.041 23:15:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:09.041 23:15:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:09.041 23:15:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:09.041 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:09.041 23:15:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:09.041 23:15:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:09.041 23:15:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:09.041 23:15:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:09.041 23:15:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:09.041 23:15:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 
00:19:09.041 23:15:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:09.041 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:09.041 23:15:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:09.041 23:15:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:09.041 23:15:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:09.041 23:15:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:09.041 23:15:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:09.041 23:15:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:09.041 23:15:30 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:09.041 23:15:30 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:09.041 23:15:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:09.041 23:15:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:09.041 23:15:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:09.041 23:15:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:09.041 23:15:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:09.041 Found net devices under 0000:31:00.0: cvl_0_0 00:19:09.041 23:15:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:09.041 23:15:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:09.041 23:15:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:09.041 23:15:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:09.041 23:15:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:09.041 23:15:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:09.041 Found net devices under 0000:31:00.1: cvl_0_1 00:19:09.041 23:15:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:09.041 23:15:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:09.041 23:15:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:09.041 23:15:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:09.041 23:15:30 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:09.041 23:15:30 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:09.041 23:15:30 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:09.041 23:15:30 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:09.041 23:15:30 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:09.041 23:15:30 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:09.041 23:15:30 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:09.041 23:15:30 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:09.041 23:15:30 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:09.041 23:15:30 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:09.041 23:15:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:09.041 23:15:30 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:09.041 23:15:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:09.041 23:15:30 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:09.041 23:15:30 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:09.041 23:15:30 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:09.041 23:15:30 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:09.041 23:15:30 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:09.041 23:15:30 -- nvmf/common.sh@259 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:09.041 23:15:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:09.041 23:15:30 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:09.041 23:15:30 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:09.041 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:09.041 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:19:09.041 00:19:09.041 --- 10.0.0.2 ping statistics --- 00:19:09.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.041 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:19:09.041 23:15:30 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:09.041 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:09.041 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.346 ms 00:19:09.041 00:19:09.041 --- 10.0.0.1 ping statistics --- 00:19:09.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.041 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:19:09.041 23:15:30 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:09.041 23:15:30 -- nvmf/common.sh@410 -- # return 0 00:19:09.041 23:15:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:09.041 23:15:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:09.041 23:15:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:09.041 23:15:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:09.041 23:15:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:09.041 23:15:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:09.041 23:15:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:09.041 23:15:30 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:09.041 23:15:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:09.041 23:15:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:09.041 23:15:30 -- common/autotest_common.sh@10 -- # set +x 00:19:09.041 23:15:30 -- nvmf/common.sh@469 -- # nvmfpid=2822586 00:19:09.041 23:15:30 -- nvmf/common.sh@470 -- # waitforlisten 2822586 00:19:09.041 23:15:30 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:09.041 23:15:30 -- common/autotest_common.sh@819 -- # '[' -z 2822586 ']' 00:19:09.041 23:15:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.041 23:15:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:09.041 23:15:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:09.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:09.041 23:15:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:09.041 23:15:30 -- common/autotest_common.sh@10 -- # set +x 00:19:09.041 [2024-06-07 23:15:30.750190] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
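Note: nvmf_tcp_init above wires the two physical E810 ports back-to-back by moving one into a private network namespace, so the target (10.0.0.2, inside the namespace) and the initiator side (10.0.0.1, in the root namespace) exchange traffic over a real link. The essential commands, paraphrased from the trace (the cvl_0_* interface names are whatever this rig enumerates), are:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator-side port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # root ns -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> root ns
  modprobe nvme-tcp
  # the target is then started inside the namespace:
  # ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc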
00:19:09.041 [2024-06-07 23:15:30.750263] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:09.041 EAL: No free 2048 kB hugepages reported on node 1 00:19:09.041 [2024-06-07 23:15:30.822884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:09.041 [2024-06-07 23:15:30.862311] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:09.041 [2024-06-07 23:15:30.862456] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:09.041 [2024-06-07 23:15:30.862466] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:09.041 [2024-06-07 23:15:30.862474] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:09.041 [2024-06-07 23:15:30.862651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:09.042 [2024-06-07 23:15:30.862772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:09.042 [2024-06-07 23:15:30.862907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.042 [2024-06-07 23:15:30.862908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:09.042 23:15:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:09.042 23:15:31 -- common/autotest_common.sh@852 -- # return 0 00:19:09.042 23:15:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:09.042 23:15:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:09.042 23:15:31 -- common/autotest_common.sh@10 -- # set +x 00:19:09.042 23:15:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:09.042 23:15:31 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:19:09.042 23:15:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:09.042 23:15:31 -- common/autotest_common.sh@10 -- # set +x 00:19:09.042 23:15:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:09.042 23:15:31 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:19:09.042 23:15:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:09.042 23:15:31 -- common/autotest_common.sh@10 -- # set +x 00:19:09.042 23:15:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:09.042 23:15:31 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:09.042 23:15:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:09.042 23:15:31 -- common/autotest_common.sh@10 -- # set +x 00:19:09.042 [2024-06-07 23:15:31.631161] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:09.042 23:15:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:09.042 23:15:31 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:09.042 23:15:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:09.042 23:15:31 -- common/autotest_common.sh@10 -- # set +x 00:19:09.042 Malloc0 00:19:09.042 23:15:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:09.042 23:15:31 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:09.042 23:15:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:09.042 23:15:31 -- common/autotest_common.sh@10 -- # set +x 00:19:09.042 23:15:31 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:09.042 23:15:31 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:09.042 23:15:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:09.042 23:15:31 -- common/autotest_common.sh@10 -- # set +x 00:19:09.042 23:15:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:09.042 23:15:31 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:09.042 23:15:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:09.042 23:15:31 -- common/autotest_common.sh@10 -- # set +x 00:19:09.042 [2024-06-07 23:15:31.704517] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:09.042 23:15:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:09.042 23:15:31 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2822638 00:19:09.042 23:15:31 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:19:09.042 23:15:31 -- target/bdev_io_wait.sh@30 -- # READ_PID=2822641 00:19:09.042 23:15:31 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:19:09.042 23:15:31 -- nvmf/common.sh@520 -- # config=() 00:19:09.042 23:15:31 -- nvmf/common.sh@520 -- # local subsystem config 00:19:09.042 23:15:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:09.042 23:15:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:09.042 { 00:19:09.042 "params": { 00:19:09.042 "name": "Nvme$subsystem", 00:19:09.042 "trtype": "$TEST_TRANSPORT", 00:19:09.042 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:09.042 "adrfam": "ipv4", 00:19:09.042 "trsvcid": "$NVMF_PORT", 00:19:09.042 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:09.042 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:09.042 "hdgst": ${hdgst:-false}, 00:19:09.042 "ddgst": ${ddgst:-false} 00:19:09.042 }, 00:19:09.042 "method": "bdev_nvme_attach_controller" 00:19:09.042 } 00:19:09.042 EOF 00:19:09.042 )") 00:19:09.042 23:15:31 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2822643 00:19:09.042 23:15:31 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:19:09.042 23:15:31 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:19:09.042 23:15:31 -- nvmf/common.sh@520 -- # config=() 00:19:09.042 23:15:31 -- nvmf/common.sh@520 -- # local subsystem config 00:19:09.042 23:15:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:09.042 23:15:31 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2822647 00:19:09.042 23:15:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:09.042 { 00:19:09.042 "params": { 00:19:09.042 "name": "Nvme$subsystem", 00:19:09.042 "trtype": "$TEST_TRANSPORT", 00:19:09.042 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:09.042 "adrfam": "ipv4", 00:19:09.042 "trsvcid": "$NVMF_PORT", 00:19:09.042 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:09.042 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:09.042 "hdgst": ${hdgst:-false}, 00:19:09.042 "ddgst": ${ddgst:-false} 00:19:09.042 }, 00:19:09.042 "method": "bdev_nvme_attach_controller" 00:19:09.042 } 00:19:09.042 EOF 00:19:09.042 )") 00:19:09.042 23:15:31 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 
-q 128 -o 4096 -w flush -t 1 -s 256 00:19:09.042 23:15:31 -- target/bdev_io_wait.sh@35 -- # sync 00:19:09.042 23:15:31 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:19:09.042 23:15:31 -- nvmf/common.sh@542 -- # cat 00:19:09.042 23:15:31 -- nvmf/common.sh@520 -- # config=() 00:19:09.042 23:15:31 -- nvmf/common.sh@520 -- # local subsystem config 00:19:09.042 23:15:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:09.042 23:15:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:09.042 { 00:19:09.042 "params": { 00:19:09.042 "name": "Nvme$subsystem", 00:19:09.042 "trtype": "$TEST_TRANSPORT", 00:19:09.042 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:09.042 "adrfam": "ipv4", 00:19:09.042 "trsvcid": "$NVMF_PORT", 00:19:09.042 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:09.042 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:09.042 "hdgst": ${hdgst:-false}, 00:19:09.042 "ddgst": ${ddgst:-false} 00:19:09.042 }, 00:19:09.042 "method": "bdev_nvme_attach_controller" 00:19:09.042 } 00:19:09.042 EOF 00:19:09.042 )") 00:19:09.042 23:15:31 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:19:09.042 23:15:31 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:19:09.042 23:15:31 -- nvmf/common.sh@520 -- # config=() 00:19:09.042 23:15:31 -- nvmf/common.sh@520 -- # local subsystem config 00:19:09.042 23:15:31 -- nvmf/common.sh@542 -- # cat 00:19:09.042 23:15:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:09.042 23:15:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:09.042 { 00:19:09.042 "params": { 00:19:09.042 "name": "Nvme$subsystem", 00:19:09.042 "trtype": "$TEST_TRANSPORT", 00:19:09.042 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:09.042 "adrfam": "ipv4", 00:19:09.042 "trsvcid": "$NVMF_PORT", 00:19:09.042 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:09.042 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:09.042 "hdgst": ${hdgst:-false}, 00:19:09.042 "ddgst": ${ddgst:-false} 00:19:09.042 }, 00:19:09.042 "method": "bdev_nvme_attach_controller" 00:19:09.042 } 00:19:09.042 EOF 00:19:09.042 )") 00:19:09.042 23:15:31 -- nvmf/common.sh@542 -- # cat 00:19:09.042 23:15:31 -- target/bdev_io_wait.sh@37 -- # wait 2822638 00:19:09.042 23:15:31 -- nvmf/common.sh@542 -- # cat 00:19:09.303 23:15:31 -- nvmf/common.sh@544 -- # jq . 00:19:09.303 23:15:31 -- nvmf/common.sh@544 -- # jq . 00:19:09.303 23:15:31 -- nvmf/common.sh@544 -- # jq . 00:19:09.303 23:15:31 -- nvmf/common.sh@545 -- # IFS=, 00:19:09.303 23:15:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:09.303 "params": { 00:19:09.303 "name": "Nvme1", 00:19:09.303 "trtype": "tcp", 00:19:09.303 "traddr": "10.0.0.2", 00:19:09.303 "adrfam": "ipv4", 00:19:09.303 "trsvcid": "4420", 00:19:09.303 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:09.303 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:09.303 "hdgst": false, 00:19:09.303 "ddgst": false 00:19:09.303 }, 00:19:09.303 "method": "bdev_nvme_attach_controller" 00:19:09.303 }' 00:19:09.303 23:15:31 -- nvmf/common.sh@544 -- # jq . 
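Note: the heredoc fragments traced above are what gen_nvmf_target_json collects per subsystem; the printf/jq lines that follow show them rendered with this run's values (tcp, 10.0.0.2:4420, cnode1/host1). A minimal standalone sketch of such a config is below; the outer "subsystems"/"bdev" wrapper is an assumption (only the per-controller fragment is echoed in the trace), and the fallback values simply mirror this run.
# sketch only -- not part of the harness; gen_nvmf_target_json is the real generator
subsystem=1
cat <<JSON | jq .
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme${subsystem}",
            "trtype": "${TEST_TRANSPORT:-tcp}",
            "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
            "adrfam": "ipv4",
            "trsvcid": "${NVMF_PORT:-4420}",
            "subnqn": "nqn.2016-06.io.spdk:cnode${subsystem}",
            "hostnqn": "nqn.2016-06.io.spdk:host${subsystem}",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON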
00:19:09.303 23:15:31 -- nvmf/common.sh@545 -- # IFS=, 00:19:09.303 23:15:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:09.303 "params": { 00:19:09.303 "name": "Nvme1", 00:19:09.303 "trtype": "tcp", 00:19:09.303 "traddr": "10.0.0.2", 00:19:09.303 "adrfam": "ipv4", 00:19:09.303 "trsvcid": "4420", 00:19:09.303 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:09.303 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:09.303 "hdgst": false, 00:19:09.303 "ddgst": false 00:19:09.303 }, 00:19:09.303 "method": "bdev_nvme_attach_controller" 00:19:09.303 }' 00:19:09.303 23:15:31 -- nvmf/common.sh@545 -- # IFS=, 00:19:09.303 23:15:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:09.303 "params": { 00:19:09.303 "name": "Nvme1", 00:19:09.303 "trtype": "tcp", 00:19:09.303 "traddr": "10.0.0.2", 00:19:09.303 "adrfam": "ipv4", 00:19:09.303 "trsvcid": "4420", 00:19:09.303 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:09.303 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:09.304 "hdgst": false, 00:19:09.304 "ddgst": false 00:19:09.304 }, 00:19:09.304 "method": "bdev_nvme_attach_controller" 00:19:09.304 }' 00:19:09.304 23:15:31 -- nvmf/common.sh@545 -- # IFS=, 00:19:09.304 23:15:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:09.304 "params": { 00:19:09.304 "name": "Nvme1", 00:19:09.304 "trtype": "tcp", 00:19:09.304 "traddr": "10.0.0.2", 00:19:09.304 "adrfam": "ipv4", 00:19:09.304 "trsvcid": "4420", 00:19:09.304 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:09.304 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:09.304 "hdgst": false, 00:19:09.304 "ddgst": false 00:19:09.304 }, 00:19:09.304 "method": "bdev_nvme_attach_controller" 00:19:09.304 }' 00:19:09.304 [2024-06-07 23:15:31.753907] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:09.304 [2024-06-07 23:15:31.753959] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:09.304 [2024-06-07 23:15:31.759799] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:09.304 [2024-06-07 23:15:31.759847] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:19:09.304 [2024-06-07 23:15:31.760266] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:09.304 [2024-06-07 23:15:31.760313] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:19:09.304 [2024-06-07 23:15:31.761574] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:19:09.304 [2024-06-07 23:15:31.761619] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:19:09.304 EAL: No free 2048 kB hugepages reported on node 1 00:19:09.304 EAL: No free 2048 kB hugepages reported on node 1 00:19:09.304 EAL: No free 2048 kB hugepages reported on node 1 00:19:09.304 [2024-06-07 23:15:31.897255] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.304 [2024-06-07 23:15:31.915425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:09.304 [2024-06-07 23:15:31.937998] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.304 EAL: No free 2048 kB hugepages reported on node 1 00:19:09.304 [2024-06-07 23:15:31.953169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:19:09.564 [2024-06-07 23:15:31.987320] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.564 [2024-06-07 23:15:32.003709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:09.564 [2024-06-07 23:15:32.044305] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.564 [2024-06-07 23:15:32.062364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:09.564 Running I/O for 1 seconds... 00:19:09.824 Running I/O for 1 seconds... 00:19:09.824 Running I/O for 1 seconds... 00:19:09.824 Running I/O for 1 seconds... 00:19:10.765 00:19:10.765 Latency(us) 00:19:10.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.765 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:19:10.765 Nvme1n1 : 1.01 12299.01 48.04 0.00 0.00 10349.88 4778.67 15837.87 00:19:10.765 =================================================================================================================== 00:19:10.765 Total : 12299.01 48.04 0.00 0.00 10349.88 4778.67 15837.87 00:19:10.765 00:19:10.765 Latency(us) 00:19:10.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.765 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:19:10.765 Nvme1n1 : 1.00 12342.70 48.21 0.00 0.00 10352.01 2949.12 24248.32 00:19:10.765 =================================================================================================================== 00:19:10.765 Total : 12342.70 48.21 0.00 0.00 10352.01 2949.12 24248.32 00:19:10.765 00:19:10.765 Latency(us) 00:19:10.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.765 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:19:10.765 Nvme1n1 : 1.00 188770.84 737.39 0.00 0.00 675.42 269.65 757.76 00:19:10.765 =================================================================================================================== 00:19:10.765 Total : 188770.84 737.39 0.00 0.00 675.42 269.65 757.76 00:19:10.765 00:19:10.765 Latency(us) 00:19:10.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.765 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:19:10.765 Nvme1n1 : 1.01 13852.91 54.11 0.00 0.00 9209.31 5789.01 18786.99 00:19:10.765 =================================================================================================================== 00:19:10.765 Total : 13852.91 54.11 0.00 0.00 9209.31 5789.01 18786.99 00:19:10.765 23:15:33 -- target/bdev_io_wait.sh@38 -- # wait 2822641 00:19:11.025 
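Note: the four jobs above are independent bdevperf processes pinned to disjoint core masks (0x10/0x20/0x40/0x80), each reading its target config through process substitution (which the child sees as /dev/fd/63) and reaped by PID once the one-second runs finish. A condensed sketch of that launch pattern, with a hypothetical binary path and a stand-in for gen_nvmf_target_json:
# sketch of the launch pattern; BDEVPERF path and gen_cfg are assumptions, flags match the trace
BDEVPERF=./build/examples/bdevperf          # hypothetical location of the bdevperf example
gen_cfg() { cat nvme_target.json; }         # stand-in for gen_nvmf_target_json
$BDEVPERF -m 0x10 -i 1 --json <(gen_cfg) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
$BDEVPERF -m 0x20 -i 2 --json <(gen_cfg) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
$BDEVPERF -m 0x40 -i 3 --json <(gen_cfg) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
$BDEVPERF -m 0x80 -i 4 --json <(gen_cfg) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
# <(gen_cfg) is presented to each child as /dev/fd/63, matching the command lines traced above
wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID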
23:15:33 -- target/bdev_io_wait.sh@39 -- # wait 2822643 00:19:11.025 23:15:33 -- target/bdev_io_wait.sh@40 -- # wait 2822647 00:19:11.025 23:15:33 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:11.025 23:15:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:11.025 23:15:33 -- common/autotest_common.sh@10 -- # set +x 00:19:11.025 23:15:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:11.025 23:15:33 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:19:11.025 23:15:33 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:19:11.025 23:15:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:11.025 23:15:33 -- nvmf/common.sh@116 -- # sync 00:19:11.025 23:15:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:11.025 23:15:33 -- nvmf/common.sh@119 -- # set +e 00:19:11.025 23:15:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:11.025 23:15:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:11.025 rmmod nvme_tcp 00:19:11.025 rmmod nvme_fabrics 00:19:11.025 rmmod nvme_keyring 00:19:11.025 23:15:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:11.025 23:15:33 -- nvmf/common.sh@123 -- # set -e 00:19:11.025 23:15:33 -- nvmf/common.sh@124 -- # return 0 00:19:11.025 23:15:33 -- nvmf/common.sh@477 -- # '[' -n 2822586 ']' 00:19:11.025 23:15:33 -- nvmf/common.sh@478 -- # killprocess 2822586 00:19:11.025 23:15:33 -- common/autotest_common.sh@926 -- # '[' -z 2822586 ']' 00:19:11.025 23:15:33 -- common/autotest_common.sh@930 -- # kill -0 2822586 00:19:11.025 23:15:33 -- common/autotest_common.sh@931 -- # uname 00:19:11.025 23:15:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:11.025 23:15:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2822586 00:19:11.025 23:15:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:11.025 23:15:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:11.025 23:15:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2822586' 00:19:11.025 killing process with pid 2822586 00:19:11.025 23:15:33 -- common/autotest_common.sh@945 -- # kill 2822586 00:19:11.025 23:15:33 -- common/autotest_common.sh@950 -- # wait 2822586 00:19:11.285 23:15:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:11.285 23:15:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:11.285 23:15:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:11.285 23:15:33 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:11.285 23:15:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:11.285 23:15:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.285 23:15:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:11.285 23:15:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.289 23:15:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:13.289 00:19:13.289 real 0m12.678s 00:19:13.289 user 0m18.899s 00:19:13.289 sys 0m6.905s 00:19:13.289 23:15:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:13.289 23:15:35 -- common/autotest_common.sh@10 -- # set +x 00:19:13.289 ************************************ 00:19:13.289 END TEST nvmf_bdev_io_wait 00:19:13.289 ************************************ 00:19:13.289 23:15:35 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:19:13.289 23:15:35 -- common/autotest_common.sh@1077 
-- # '[' 3 -le 1 ']' 00:19:13.289 23:15:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:13.289 23:15:35 -- common/autotest_common.sh@10 -- # set +x 00:19:13.289 ************************************ 00:19:13.289 START TEST nvmf_queue_depth 00:19:13.289 ************************************ 00:19:13.289 23:15:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:19:13.289 * Looking for test storage... 00:19:13.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:13.289 23:15:35 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:13.289 23:15:35 -- nvmf/common.sh@7 -- # uname -s 00:19:13.289 23:15:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:13.289 23:15:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:13.289 23:15:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:13.289 23:15:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:13.289 23:15:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:13.289 23:15:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:13.289 23:15:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:13.289 23:15:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:13.289 23:15:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:13.289 23:15:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:13.289 23:15:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:13.289 23:15:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:13.290 23:15:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:13.551 23:15:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:13.551 23:15:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:13.551 23:15:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:13.551 23:15:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:13.551 23:15:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:13.551 23:15:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:13.551 23:15:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.551 23:15:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.551 23:15:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.551 23:15:35 -- paths/export.sh@5 -- # export PATH 00:19:13.551 23:15:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.551 23:15:35 -- nvmf/common.sh@46 -- # : 0 00:19:13.551 23:15:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:13.551 23:15:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:13.551 23:15:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:13.551 23:15:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:13.551 23:15:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:13.551 23:15:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:13.551 23:15:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:13.551 23:15:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:13.551 23:15:35 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:19:13.551 23:15:35 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:19:13.551 23:15:35 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:13.551 23:15:35 -- target/queue_depth.sh@19 -- # nvmftestinit 00:19:13.551 23:15:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:13.551 23:15:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:13.551 23:15:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:13.551 23:15:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:13.551 23:15:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:13.551 23:15:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:13.551 23:15:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:13.551 23:15:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.551 23:15:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:13.551 23:15:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:13.551 23:15:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:13.551 23:15:35 -- common/autotest_common.sh@10 -- # set +x 00:19:21.700 23:15:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:21.700 23:15:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:21.700 23:15:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:21.700 23:15:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:21.700 23:15:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:21.700 23:15:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:21.700 23:15:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:21.700 23:15:42 -- nvmf/common.sh@294 -- # net_devs=() 
00:19:21.700 23:15:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:21.700 23:15:42 -- nvmf/common.sh@295 -- # e810=() 00:19:21.700 23:15:42 -- nvmf/common.sh@295 -- # local -ga e810 00:19:21.700 23:15:42 -- nvmf/common.sh@296 -- # x722=() 00:19:21.700 23:15:42 -- nvmf/common.sh@296 -- # local -ga x722 00:19:21.700 23:15:42 -- nvmf/common.sh@297 -- # mlx=() 00:19:21.700 23:15:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:21.700 23:15:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:21.700 23:15:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:21.700 23:15:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:21.700 23:15:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:21.700 23:15:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:21.700 23:15:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:21.700 23:15:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:21.700 23:15:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:21.700 23:15:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:21.700 23:15:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:21.700 23:15:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:21.700 23:15:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:21.700 23:15:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:21.700 23:15:42 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:21.700 23:15:42 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:21.700 23:15:42 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:21.700 23:15:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:21.700 23:15:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:21.700 23:15:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:21.700 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:21.700 23:15:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:21.700 23:15:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:21.700 23:15:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:21.700 23:15:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:21.700 23:15:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:21.700 23:15:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:21.700 23:15:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:21.700 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:21.700 23:15:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:21.700 23:15:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:21.700 23:15:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:21.700 23:15:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:21.700 23:15:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:21.700 23:15:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:21.700 23:15:42 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:21.700 23:15:42 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:21.700 23:15:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:21.700 23:15:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.700 23:15:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:21.700 23:15:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
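Note: the discovery loop above maps each whitelisted PCI ID (e810/x722/mlx) to its kernel interface by globbing the device's net/ directory and stripping the path prefix, which is how cvl_0_0 and cvl_0_1 are found. A reduced sketch of that step, using one address from this run as an example:
# sketch of the netdev lookup traced above; the harness iterates all whitelisted PCI devices
pci=0000:31:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"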
00:19:21.700 23:15:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:21.700 Found net devices under 0000:31:00.0: cvl_0_0 00:19:21.700 23:15:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.700 23:15:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:21.700 23:15:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.700 23:15:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:21.700 23:15:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.700 23:15:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:21.700 Found net devices under 0000:31:00.1: cvl_0_1 00:19:21.700 23:15:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.700 23:15:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:21.700 23:15:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:21.700 23:15:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:21.700 23:15:42 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:21.700 23:15:42 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:21.700 23:15:42 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:21.700 23:15:42 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:21.700 23:15:42 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:21.700 23:15:42 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:21.700 23:15:42 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:21.700 23:15:42 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:21.700 23:15:42 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:21.700 23:15:42 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:21.700 23:15:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:21.700 23:15:42 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:21.700 23:15:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:21.700 23:15:42 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:21.700 23:15:42 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:21.700 23:15:43 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:21.700 23:15:43 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:21.700 23:15:43 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:21.700 23:15:43 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:21.700 23:15:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:21.700 23:15:43 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:21.700 23:15:43 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:21.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:21.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms 00:19:21.700 00:19:21.700 --- 10.0.0.2 ping statistics --- 00:19:21.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.700 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms 00:19:21.700 23:15:43 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:21.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:21.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:19:21.700 00:19:21.700 --- 10.0.0.1 ping statistics --- 00:19:21.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.700 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:19:21.700 23:15:43 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:21.700 23:15:43 -- nvmf/common.sh@410 -- # return 0 00:19:21.700 23:15:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:21.700 23:15:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:21.700 23:15:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:21.700 23:15:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:21.700 23:15:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:21.700 23:15:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:21.700 23:15:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:21.700 23:15:43 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:19:21.700 23:15:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:21.700 23:15:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:21.700 23:15:43 -- common/autotest_common.sh@10 -- # set +x 00:19:21.700 23:15:43 -- nvmf/common.sh@469 -- # nvmfpid=2827389 00:19:21.700 23:15:43 -- nvmf/common.sh@470 -- # waitforlisten 2827389 00:19:21.700 23:15:43 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:21.700 23:15:43 -- common/autotest_common.sh@819 -- # '[' -z 2827389 ']' 00:19:21.701 23:15:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.701 23:15:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:21.701 23:15:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:21.701 23:15:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:21.701 23:15:43 -- common/autotest_common.sh@10 -- # set +x 00:19:21.701 [2024-06-07 23:15:43.286829] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:21.701 [2024-06-07 23:15:43.286877] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:21.701 EAL: No free 2048 kB hugepages reported on node 1 00:19:21.701 [2024-06-07 23:15:43.369654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.701 [2024-06-07 23:15:43.401994] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:21.701 [2024-06-07 23:15:43.402134] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:21.701 [2024-06-07 23:15:43.402143] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:21.701 [2024-06-07 23:15:43.402150] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
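Note: nvmf_tcp_init above builds a loopback topology from the two ports of one NIC: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1/24), TCP port 4420 is allowed in, and both directions are verified with ping before nvmf_tgt is started inside the namespace. A condensed sketch of those steps (interface names as in this run; requires root):
# mirrors the nvmf_tcp_init commands traced above, nothing new added
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                             # reach the target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # and back out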
00:19:21.701 [2024-06-07 23:15:43.402176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.701 23:15:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:21.701 23:15:44 -- common/autotest_common.sh@852 -- # return 0 00:19:21.701 23:15:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:21.701 23:15:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:21.701 23:15:44 -- common/autotest_common.sh@10 -- # set +x 00:19:21.701 23:15:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:21.701 23:15:44 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:21.701 23:15:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:21.701 23:15:44 -- common/autotest_common.sh@10 -- # set +x 00:19:21.701 [2024-06-07 23:15:44.103830] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:21.701 23:15:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:21.701 23:15:44 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:21.701 23:15:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:21.701 23:15:44 -- common/autotest_common.sh@10 -- # set +x 00:19:21.701 Malloc0 00:19:21.701 23:15:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:21.701 23:15:44 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:21.701 23:15:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:21.701 23:15:44 -- common/autotest_common.sh@10 -- # set +x 00:19:21.701 23:15:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:21.701 23:15:44 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:21.701 23:15:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:21.701 23:15:44 -- common/autotest_common.sh@10 -- # set +x 00:19:21.701 23:15:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:21.701 23:15:44 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:21.701 23:15:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:21.701 23:15:44 -- common/autotest_common.sh@10 -- # set +x 00:19:21.701 [2024-06-07 23:15:44.177010] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:21.701 23:15:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:21.701 23:15:44 -- target/queue_depth.sh@30 -- # bdevperf_pid=2827497 00:19:21.701 23:15:44 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:21.701 23:15:44 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:19:21.701 23:15:44 -- target/queue_depth.sh@33 -- # waitforlisten 2827497 /var/tmp/bdevperf.sock 00:19:21.701 23:15:44 -- common/autotest_common.sh@819 -- # '[' -z 2827497 ']' 00:19:21.701 23:15:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:21.701 23:15:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:21.701 23:15:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:21.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:21.701 23:15:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:21.701 23:15:44 -- common/autotest_common.sh@10 -- # set +x 00:19:21.701 [2024-06-07 23:15:44.230675] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:21.701 [2024-06-07 23:15:44.230735] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2827497 ] 00:19:21.701 EAL: No free 2048 kB hugepages reported on node 1 00:19:21.701 [2024-06-07 23:15:44.295602] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.701 [2024-06-07 23:15:44.332761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.962 23:15:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:21.962 23:15:44 -- common/autotest_common.sh@852 -- # return 0 00:19:21.962 23:15:44 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:21.962 23:15:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:21.962 23:15:44 -- common/autotest_common.sh@10 -- # set +x 00:19:21.962 NVMe0n1 00:19:21.962 23:15:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:21.962 23:15:44 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:22.222 Running I/O for 10 seconds... 00:19:32.226 00:19:32.226 Latency(us) 00:19:32.226 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.226 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:19:32.226 Verification LBA range: start 0x0 length 0x4000 00:19:32.226 NVMe0n1 : 10.04 18485.58 72.21 0.00 0.00 55233.83 10813.44 52647.25 00:19:32.226 =================================================================================================================== 00:19:32.226 Total : 18485.58 72.21 0.00 0.00 55233.83 10813.44 52647.25 00:19:32.226 0 00:19:32.226 23:15:54 -- target/queue_depth.sh@39 -- # killprocess 2827497 00:19:32.226 23:15:54 -- common/autotest_common.sh@926 -- # '[' -z 2827497 ']' 00:19:32.226 23:15:54 -- common/autotest_common.sh@930 -- # kill -0 2827497 00:19:32.226 23:15:54 -- common/autotest_common.sh@931 -- # uname 00:19:32.226 23:15:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:32.226 23:15:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2827497 00:19:32.226 23:15:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:32.226 23:15:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:32.226 23:15:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2827497' 00:19:32.226 killing process with pid 2827497 00:19:32.226 23:15:54 -- common/autotest_common.sh@945 -- # kill 2827497 00:19:32.226 Received shutdown signal, test time was about 10.000000 seconds 00:19:32.226 00:19:32.226 Latency(us) 00:19:32.226 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.226 =================================================================================================================== 00:19:32.226 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:32.226 23:15:54 -- 
common/autotest_common.sh@950 -- # wait 2827497 00:19:32.487 23:15:54 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:19:32.487 23:15:54 -- target/queue_depth.sh@43 -- # nvmftestfini 00:19:32.487 23:15:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:32.487 23:15:54 -- nvmf/common.sh@116 -- # sync 00:19:32.487 23:15:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:32.487 23:15:54 -- nvmf/common.sh@119 -- # set +e 00:19:32.487 23:15:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:32.487 23:15:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:32.487 rmmod nvme_tcp 00:19:32.487 rmmod nvme_fabrics 00:19:32.487 rmmod nvme_keyring 00:19:32.487 23:15:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:32.487 23:15:54 -- nvmf/common.sh@123 -- # set -e 00:19:32.487 23:15:54 -- nvmf/common.sh@124 -- # return 0 00:19:32.487 23:15:54 -- nvmf/common.sh@477 -- # '[' -n 2827389 ']' 00:19:32.487 23:15:54 -- nvmf/common.sh@478 -- # killprocess 2827389 00:19:32.487 23:15:54 -- common/autotest_common.sh@926 -- # '[' -z 2827389 ']' 00:19:32.487 23:15:54 -- common/autotest_common.sh@930 -- # kill -0 2827389 00:19:32.487 23:15:54 -- common/autotest_common.sh@931 -- # uname 00:19:32.487 23:15:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:32.487 23:15:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2827389 00:19:32.487 23:15:55 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:32.487 23:15:55 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:32.487 23:15:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2827389' 00:19:32.487 killing process with pid 2827389 00:19:32.487 23:15:55 -- common/autotest_common.sh@945 -- # kill 2827389 00:19:32.487 23:15:55 -- common/autotest_common.sh@950 -- # wait 2827389 00:19:32.748 23:15:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:32.748 23:15:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:32.748 23:15:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:32.748 23:15:55 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:32.748 23:15:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:32.748 23:15:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.748 23:15:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:32.748 23:15:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.661 23:15:57 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:34.661 00:19:34.661 real 0m21.370s 00:19:34.661 user 0m24.198s 00:19:34.661 sys 0m6.536s 00:19:34.661 23:15:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:34.661 23:15:57 -- common/autotest_common.sh@10 -- # set +x 00:19:34.661 ************************************ 00:19:34.661 END TEST nvmf_queue_depth 00:19:34.661 ************************************ 00:19:34.661 23:15:57 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:19:34.661 23:15:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:34.661 23:15:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:34.661 23:15:57 -- common/autotest_common.sh@10 -- # set +x 00:19:34.661 ************************************ 00:19:34.661 START TEST nvmf_multipath 00:19:34.661 ************************************ 00:19:34.661 23:15:57 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:19:34.921 * Looking for test storage... 00:19:34.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:34.921 23:15:57 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:34.921 23:15:57 -- nvmf/common.sh@7 -- # uname -s 00:19:34.921 23:15:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:34.921 23:15:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:34.921 23:15:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:34.921 23:15:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:34.921 23:15:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:34.921 23:15:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:34.921 23:15:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:34.921 23:15:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:34.921 23:15:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:34.921 23:15:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:34.921 23:15:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:34.921 23:15:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:34.921 23:15:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:34.921 23:15:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:34.921 23:15:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:34.921 23:15:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:34.921 23:15:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:34.921 23:15:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:34.921 23:15:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:34.921 23:15:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.921 23:15:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.921 23:15:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.921 23:15:57 -- paths/export.sh@5 -- # export PATH 00:19:34.921 23:15:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.921 23:15:57 -- nvmf/common.sh@46 -- # : 0 00:19:34.921 23:15:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:34.921 23:15:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:34.921 23:15:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:34.922 23:15:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:34.922 23:15:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:34.922 23:15:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:34.922 23:15:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:34.922 23:15:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:34.922 23:15:57 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:34.922 23:15:57 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:34.922 23:15:57 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:34.922 23:15:57 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:34.922 23:15:57 -- target/multipath.sh@43 -- # nvmftestinit 00:19:34.922 23:15:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:34.922 23:15:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:34.922 23:15:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:34.922 23:15:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:34.922 23:15:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:34.922 23:15:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.922 23:15:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:34.922 23:15:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.922 23:15:57 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:34.922 23:15:57 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:34.922 23:15:57 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:34.922 23:15:57 -- common/autotest_common.sh@10 -- # set +x 00:19:41.504 23:16:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:41.504 23:16:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:41.504 23:16:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:41.504 23:16:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:41.504 23:16:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:41.504 23:16:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:41.504 23:16:04 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:19:41.504 23:16:04 -- nvmf/common.sh@294 -- # net_devs=() 00:19:41.504 23:16:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:41.504 23:16:04 -- nvmf/common.sh@295 -- # e810=() 00:19:41.504 23:16:04 -- nvmf/common.sh@295 -- # local -ga e810 00:19:41.504 23:16:04 -- nvmf/common.sh@296 -- # x722=() 00:19:41.504 23:16:04 -- nvmf/common.sh@296 -- # local -ga x722 00:19:41.504 23:16:04 -- nvmf/common.sh@297 -- # mlx=() 00:19:41.504 23:16:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:41.504 23:16:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:41.504 23:16:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:41.504 23:16:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:41.504 23:16:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:41.504 23:16:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:41.504 23:16:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:41.504 23:16:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:41.504 23:16:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:41.504 23:16:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:41.504 23:16:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:41.504 23:16:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:41.504 23:16:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:41.504 23:16:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:41.504 23:16:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:41.504 23:16:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:41.504 23:16:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:41.504 23:16:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:41.504 23:16:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:41.504 23:16:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:41.504 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:41.504 23:16:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:41.504 23:16:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:41.504 23:16:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:41.504 23:16:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:41.504 23:16:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:41.504 23:16:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:41.505 23:16:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:41.505 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:41.505 23:16:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:41.505 23:16:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:41.505 23:16:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:41.505 23:16:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:41.505 23:16:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:41.505 23:16:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:41.505 23:16:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:41.505 23:16:04 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:41.505 23:16:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:41.505 23:16:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:41.505 23:16:04 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:19:41.505 23:16:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:41.505 23:16:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:41.505 Found net devices under 0000:31:00.0: cvl_0_0 00:19:41.505 23:16:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:41.505 23:16:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:41.505 23:16:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:41.505 23:16:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:41.505 23:16:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:41.505 23:16:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:41.505 Found net devices under 0000:31:00.1: cvl_0_1 00:19:41.505 23:16:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:41.505 23:16:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:41.505 23:16:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:41.505 23:16:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:41.505 23:16:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:41.505 23:16:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:41.505 23:16:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:41.505 23:16:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:41.505 23:16:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:41.505 23:16:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:41.505 23:16:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:41.505 23:16:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:41.505 23:16:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:41.505 23:16:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:41.505 23:16:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:41.505 23:16:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:41.505 23:16:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:41.505 23:16:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:41.505 23:16:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:41.766 23:16:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:41.766 23:16:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:41.766 23:16:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:41.766 23:16:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:41.766 23:16:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:41.766 23:16:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:41.766 23:16:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:41.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:41.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:19:41.766 00:19:41.766 --- 10.0.0.2 ping statistics --- 00:19:41.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.766 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:19:41.766 23:16:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:41.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:41.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:19:41.766 00:19:41.766 --- 10.0.0.1 ping statistics --- 00:19:41.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.766 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:19:41.766 23:16:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:41.766 23:16:04 -- nvmf/common.sh@410 -- # return 0 00:19:41.766 23:16:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:41.766 23:16:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:41.766 23:16:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:41.766 23:16:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:41.766 23:16:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:41.766 23:16:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:41.766 23:16:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:41.766 23:16:04 -- target/multipath.sh@45 -- # '[' -z ']' 00:19:41.766 23:16:04 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:19:41.766 only one NIC for nvmf test 00:19:41.766 23:16:04 -- target/multipath.sh@47 -- # nvmftestfini 00:19:41.766 23:16:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:41.766 23:16:04 -- nvmf/common.sh@116 -- # sync 00:19:41.766 23:16:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:41.766 23:16:04 -- nvmf/common.sh@119 -- # set +e 00:19:41.766 23:16:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:41.766 23:16:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:41.766 rmmod nvme_tcp 00:19:42.027 rmmod nvme_fabrics 00:19:42.027 rmmod nvme_keyring 00:19:42.027 23:16:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:42.027 23:16:04 -- nvmf/common.sh@123 -- # set -e 00:19:42.027 23:16:04 -- nvmf/common.sh@124 -- # return 0 00:19:42.027 23:16:04 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:19:42.027 23:16:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:42.027 23:16:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:42.027 23:16:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:42.027 23:16:04 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:42.027 23:16:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:42.027 23:16:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.027 23:16:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:42.027 23:16:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.939 23:16:06 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:43.939 23:16:06 -- target/multipath.sh@48 -- # exit 0 00:19:43.939 23:16:06 -- target/multipath.sh@1 -- # nvmftestfini 00:19:43.939 23:16:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:43.939 23:16:06 -- nvmf/common.sh@116 -- # sync 00:19:43.939 23:16:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:43.939 23:16:06 -- nvmf/common.sh@119 -- # set +e 00:19:43.939 23:16:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:43.939 23:16:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:43.939 23:16:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:43.939 23:16:06 -- nvmf/common.sh@123 -- # set -e 00:19:43.939 23:16:06 -- nvmf/common.sh@124 -- # return 0 00:19:43.939 23:16:06 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:19:43.939 23:16:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:43.939 23:16:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:43.939 23:16:06 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:19:43.939 23:16:06 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:43.939 23:16:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:43.939 23:16:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.939 23:16:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:43.939 23:16:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.939 23:16:06 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:43.939 00:19:43.939 real 0m9.316s 00:19:43.939 user 0m1.977s 00:19:43.939 sys 0m5.235s 00:19:43.939 23:16:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:43.939 23:16:06 -- common/autotest_common.sh@10 -- # set +x 00:19:43.939 ************************************ 00:19:43.939 END TEST nvmf_multipath 00:19:43.939 ************************************ 00:19:44.200 23:16:06 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:19:44.200 23:16:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:44.200 23:16:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:44.200 23:16:06 -- common/autotest_common.sh@10 -- # set +x 00:19:44.200 ************************************ 00:19:44.200 START TEST nvmf_zcopy 00:19:44.200 ************************************ 00:19:44.200 23:16:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:19:44.200 * Looking for test storage... 00:19:44.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:44.200 23:16:06 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:44.200 23:16:06 -- nvmf/common.sh@7 -- # uname -s 00:19:44.200 23:16:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:44.200 23:16:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:44.200 23:16:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:44.200 23:16:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:44.200 23:16:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:44.200 23:16:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:44.200 23:16:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:44.200 23:16:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:44.200 23:16:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:44.200 23:16:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:44.200 23:16:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:44.200 23:16:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:44.200 23:16:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:44.200 23:16:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:44.200 23:16:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:44.200 23:16:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:44.200 23:16:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:44.200 23:16:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:44.201 23:16:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:44.201 23:16:06 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.201 23:16:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.201 23:16:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.201 23:16:06 -- paths/export.sh@5 -- # export PATH 00:19:44.201 23:16:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.201 23:16:06 -- nvmf/common.sh@46 -- # : 0 00:19:44.201 23:16:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:44.201 23:16:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:44.201 23:16:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:44.201 23:16:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:44.201 23:16:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:44.201 23:16:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:44.201 23:16:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:44.201 23:16:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:44.201 23:16:06 -- target/zcopy.sh@12 -- # nvmftestinit 00:19:44.201 23:16:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:44.201 23:16:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:44.201 23:16:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:44.201 23:16:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:44.201 23:16:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:44.201 23:16:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.201 23:16:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:44.201 23:16:06 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.201 23:16:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:44.201 23:16:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:44.201 23:16:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:44.201 23:16:06 -- common/autotest_common.sh@10 -- # set +x 00:19:50.785 23:16:13 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:50.785 23:16:13 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:50.785 23:16:13 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:50.785 23:16:13 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:50.785 23:16:13 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:50.785 23:16:13 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:50.785 23:16:13 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:50.785 23:16:13 -- nvmf/common.sh@294 -- # net_devs=() 00:19:50.785 23:16:13 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:50.785 23:16:13 -- nvmf/common.sh@295 -- # e810=() 00:19:50.785 23:16:13 -- nvmf/common.sh@295 -- # local -ga e810 00:19:50.785 23:16:13 -- nvmf/common.sh@296 -- # x722=() 00:19:50.785 23:16:13 -- nvmf/common.sh@296 -- # local -ga x722 00:19:50.785 23:16:13 -- nvmf/common.sh@297 -- # mlx=() 00:19:50.785 23:16:13 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:50.785 23:16:13 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:50.785 23:16:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:50.785 23:16:13 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:50.785 23:16:13 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:50.785 23:16:13 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:50.785 23:16:13 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:50.785 23:16:13 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:50.785 23:16:13 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:50.785 23:16:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:50.785 23:16:13 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:50.785 23:16:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:50.785 23:16:13 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:50.785 23:16:13 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:50.785 23:16:13 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:50.785 23:16:13 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:50.785 23:16:13 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:50.785 23:16:13 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:50.785 23:16:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:50.785 23:16:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:50.785 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:50.785 23:16:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:50.785 23:16:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:50.785 23:16:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:50.785 23:16:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:50.785 23:16:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:50.785 23:16:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:50.785 23:16:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:50.785 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:50.785 
23:16:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:50.785 23:16:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:50.785 23:16:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:50.785 23:16:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:50.785 23:16:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:50.785 23:16:13 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:50.785 23:16:13 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:50.785 23:16:13 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:50.785 23:16:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:50.785 23:16:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:50.785 23:16:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:50.785 23:16:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:50.785 23:16:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:50.785 Found net devices under 0000:31:00.0: cvl_0_0 00:19:50.785 23:16:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:50.785 23:16:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:50.785 23:16:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:50.785 23:16:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:50.785 23:16:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:50.785 23:16:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:50.785 Found net devices under 0000:31:00.1: cvl_0_1 00:19:50.786 23:16:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:50.786 23:16:13 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:50.786 23:16:13 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:50.786 23:16:13 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:50.786 23:16:13 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:50.786 23:16:13 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:50.786 23:16:13 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:50.786 23:16:13 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:50.786 23:16:13 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:50.786 23:16:13 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:50.786 23:16:13 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:50.786 23:16:13 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:50.786 23:16:13 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:50.786 23:16:13 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:50.786 23:16:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:50.786 23:16:13 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:50.786 23:16:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:50.786 23:16:13 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:50.786 23:16:13 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:51.046 23:16:13 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:51.046 23:16:13 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:51.046 23:16:13 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:51.046 23:16:13 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:51.046 23:16:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:51.046 23:16:13 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:51.307 23:16:13 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:51.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:51.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.834 ms 00:19:51.307 00:19:51.307 --- 10.0.0.2 ping statistics --- 00:19:51.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.307 rtt min/avg/max/mdev = 0.834/0.834/0.834/0.000 ms 00:19:51.307 23:16:13 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:51.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:51.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:19:51.307 00:19:51.307 --- 10.0.0.1 ping statistics --- 00:19:51.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.307 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:19:51.307 23:16:13 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:51.307 23:16:13 -- nvmf/common.sh@410 -- # return 0 00:19:51.307 23:16:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:51.307 23:16:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:51.307 23:16:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:51.307 23:16:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:51.307 23:16:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:51.307 23:16:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:51.307 23:16:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:51.307 23:16:13 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:19:51.307 23:16:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:51.307 23:16:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:51.307 23:16:13 -- common/autotest_common.sh@10 -- # set +x 00:19:51.307 23:16:13 -- nvmf/common.sh@469 -- # nvmfpid=2837972 00:19:51.307 23:16:13 -- nvmf/common.sh@470 -- # waitforlisten 2837972 00:19:51.307 23:16:13 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:51.307 23:16:13 -- common/autotest_common.sh@819 -- # '[' -z 2837972 ']' 00:19:51.307 23:16:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.307 23:16:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:51.307 23:16:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:51.307 23:16:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:51.307 23:16:13 -- common/autotest_common.sh@10 -- # set +x 00:19:51.307 [2024-06-07 23:16:13.837385] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
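The nvmf_tcp_init steps traced above amount to a short iproute2/iptables sequence: one port of the NIC (cvl_0_0) is moved into a dedicated network namespace and gets the target address 10.0.0.2, the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, NVMe/TCP port 4420 is opened, and both directions are ping-verified. Pulled out of the xtrace, the plumbing is:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator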
00:19:51.307 [2024-06-07 23:16:13.837446] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:51.307 EAL: No free 2048 kB hugepages reported on node 1 00:19:51.307 [2024-06-07 23:16:13.927893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.307 [2024-06-07 23:16:13.973434] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:51.307 [2024-06-07 23:16:13.973570] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:51.307 [2024-06-07 23:16:13.973579] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:51.307 [2024-06-07 23:16:13.973587] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:51.307 [2024-06-07 23:16:13.973608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.250 23:16:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:52.250 23:16:14 -- common/autotest_common.sh@852 -- # return 0 00:19:52.250 23:16:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:52.250 23:16:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:52.250 23:16:14 -- common/autotest_common.sh@10 -- # set +x 00:19:52.250 23:16:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.250 23:16:14 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:19:52.250 23:16:14 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:19:52.250 23:16:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.250 23:16:14 -- common/autotest_common.sh@10 -- # set +x 00:19:52.250 [2024-06-07 23:16:14.658212] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:52.250 23:16:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.250 23:16:14 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:52.250 23:16:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.250 23:16:14 -- common/autotest_common.sh@10 -- # set +x 00:19:52.250 23:16:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.250 23:16:14 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:52.250 23:16:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.250 23:16:14 -- common/autotest_common.sh@10 -- # set +x 00:19:52.250 [2024-06-07 23:16:14.682369] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:52.250 23:16:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.250 23:16:14 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:52.250 23:16:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.250 23:16:14 -- common/autotest_common.sh@10 -- # set +x 00:19:52.250 23:16:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.250 23:16:14 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:19:52.250 23:16:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.250 23:16:14 -- common/autotest_common.sh@10 -- # set +x 00:19:52.250 malloc0 00:19:52.250 23:16:14 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:19:52.250 23:16:14 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:52.250 23:16:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.250 23:16:14 -- common/autotest_common.sh@10 -- # set +x 00:19:52.250 23:16:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.250 23:16:14 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:19:52.250 23:16:14 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:19:52.250 23:16:14 -- nvmf/common.sh@520 -- # config=() 00:19:52.250 23:16:14 -- nvmf/common.sh@520 -- # local subsystem config 00:19:52.250 23:16:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:52.250 23:16:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:52.250 { 00:19:52.250 "params": { 00:19:52.250 "name": "Nvme$subsystem", 00:19:52.250 "trtype": "$TEST_TRANSPORT", 00:19:52.250 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:52.250 "adrfam": "ipv4", 00:19:52.250 "trsvcid": "$NVMF_PORT", 00:19:52.250 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:52.250 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:52.250 "hdgst": ${hdgst:-false}, 00:19:52.250 "ddgst": ${ddgst:-false} 00:19:52.250 }, 00:19:52.250 "method": "bdev_nvme_attach_controller" 00:19:52.250 } 00:19:52.250 EOF 00:19:52.250 )") 00:19:52.250 23:16:14 -- nvmf/common.sh@542 -- # cat 00:19:52.250 23:16:14 -- nvmf/common.sh@544 -- # jq . 00:19:52.250 23:16:14 -- nvmf/common.sh@545 -- # IFS=, 00:19:52.250 23:16:14 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:52.250 "params": { 00:19:52.250 "name": "Nvme1", 00:19:52.250 "trtype": "tcp", 00:19:52.250 "traddr": "10.0.0.2", 00:19:52.250 "adrfam": "ipv4", 00:19:52.250 "trsvcid": "4420", 00:19:52.250 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:52.250 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:52.250 "hdgst": false, 00:19:52.250 "ddgst": false 00:19:52.250 }, 00:19:52.250 "method": "bdev_nvme_attach_controller" 00:19:52.250 }' 00:19:52.250 [2024-06-07 23:16:14.773056] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:52.250 [2024-06-07 23:16:14.773102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2838272 ] 00:19:52.250 EAL: No free 2048 kB hugepages reported on node 1 00:19:52.250 [2024-06-07 23:16:14.831366] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.250 [2024-06-07 23:16:14.860715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.510 Running I/O for 10 seconds... 
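The target-side setup for the zcopy run traced above is a plain RPC sequence against the nvmf_tgt that was started with -i 0 -e 0xFFFF -m 0x2 inside the namespace; rpc_cmd ultimately drives scripts/rpc.py on the default /var/tmp/spdk.sock socket, which remains reachable from the root namespace. Replayed as direct rpc.py calls with the same arguments as in the trace:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"
  # TCP transport with zero-copy enabled and in-capsule data disabled (flags exactly as traced)
  $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy
  # Subsystem allowing any host, serial SPDK00000000000001, at most 10 namespaces
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # 32 MiB malloc bdev with 4096-byte blocks, exported as namespace 1
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1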
00:20:02.509 00:20:02.509 Latency(us) 00:20:02.509 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.509 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:20:02.509 Verification LBA range: start 0x0 length 0x1000 00:20:02.509 Nvme1n1 : 10.01 13567.28 105.99 0.00 0.00 9406.19 1563.31 22391.47 00:20:02.509 =================================================================================================================== 00:20:02.509 Total : 13567.28 105.99 0.00 0.00 9406.19 1563.31 22391.47 00:20:02.509 23:16:25 -- target/zcopy.sh@39 -- # perfpid=2840303 00:20:02.509 23:16:25 -- target/zcopy.sh@41 -- # xtrace_disable 00:20:02.509 23:16:25 -- common/autotest_common.sh@10 -- # set +x 00:20:02.509 23:16:25 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:20:02.509 23:16:25 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:20:02.509 23:16:25 -- nvmf/common.sh@520 -- # config=() 00:20:02.509 23:16:25 -- nvmf/common.sh@520 -- # local subsystem config 00:20:02.509 23:16:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:02.509 23:16:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:02.509 { 00:20:02.509 "params": { 00:20:02.509 "name": "Nvme$subsystem", 00:20:02.509 "trtype": "$TEST_TRANSPORT", 00:20:02.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:02.509 "adrfam": "ipv4", 00:20:02.509 "trsvcid": "$NVMF_PORT", 00:20:02.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:02.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:02.509 "hdgst": ${hdgst:-false}, 00:20:02.509 "ddgst": ${ddgst:-false} 00:20:02.509 }, 00:20:02.509 "method": "bdev_nvme_attach_controller" 00:20:02.509 } 00:20:02.509 EOF 00:20:02.509 )") 00:20:02.509 23:16:25 -- nvmf/common.sh@542 -- # cat 00:20:02.509 [2024-06-07 23:16:25.152412] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.509 [2024-06-07 23:16:25.152439] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.509 23:16:25 -- nvmf/common.sh@544 -- # jq . 
00:20:02.509 23:16:25 -- nvmf/common.sh@545 -- # IFS=, 00:20:02.509 23:16:25 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:02.509 "params": { 00:20:02.509 "name": "Nvme1", 00:20:02.509 "trtype": "tcp", 00:20:02.509 "traddr": "10.0.0.2", 00:20:02.509 "adrfam": "ipv4", 00:20:02.509 "trsvcid": "4420", 00:20:02.509 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.509 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:02.509 "hdgst": false, 00:20:02.509 "ddgst": false 00:20:02.509 }, 00:20:02.509 "method": "bdev_nvme_attach_controller" 00:20:02.509 }' 00:20:02.509 [2024-06-07 23:16:25.164416] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.509 [2024-06-07 23:16:25.164424] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.509 [2024-06-07 23:16:25.176445] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.509 [2024-06-07 23:16:25.176452] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.509 [2024-06-07 23:16:25.188474] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.509 [2024-06-07 23:16:25.188481] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.770 [2024-06-07 23:16:25.192477] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:02.770 [2024-06-07 23:16:25.192522] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2840303 ] 00:20:02.771 [2024-06-07 23:16:25.200506] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.771 [2024-06-07 23:16:25.200513] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.771 [2024-06-07 23:16:25.212536] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.771 [2024-06-07 23:16:25.212543] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.771 EAL: No free 2048 kB hugepages reported on node 1 00:20:02.771 [2024-06-07 23:16:25.224567] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.771 [2024-06-07 23:16:25.224575] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.771 [2024-06-07 23:16:25.236596] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.771 [2024-06-07 23:16:25.236603] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.771 [2024-06-07 23:16:25.248626] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.771 [2024-06-07 23:16:25.248637] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.771 [2024-06-07 23:16:25.251492] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.771 [2024-06-07 23:16:25.260659] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.771 [2024-06-07 23:16:25.260667] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.771 [2024-06-07 23:16:25.272691] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.771 [2024-06-07 23:16:25.272704] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
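The JSON that gen_nvmf_target_json feeds to bdevperf through --json /dev/fd/62 (and /dev/fd/63 for this second, 5-second randrw pass) boils down to one bdev_nvme_attach_controller call against the listener created earlier. A stand-alone equivalent, written to a file instead of a process-substitution fd, might look like the sketch below; the outer subsystems/bdev wrapper is the usual SPDK JSON-config layout and is an assumption here, since the trace only prints the inner attach-controller object:

  cat > /tmp/bdevperf_nvmf.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  # Same bdevperf flags as the 5-second pass in the trace
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      --json /tmp/bdevperf_nvmf.json -t 5 -q 128 -w randrw -M 50 -o 8192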
00:20:02.771 [2024-06-07 23:16:25.280023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.771 [2024-06-07 23:16:25.284720] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.771 [2024-06-07 23:16:25.284728] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.771 [2024-06-07 23:16:25.296756] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.771 [2024-06-07 23:16:25.296767] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.771 [2024-06-07 23:16:25.308788] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.771 [2024-06-07 23:16:25.308799] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.771 [2024-06-07 23:16:25.320817] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.771 [2024-06-07 23:16:25.320824] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.771 [2024-06-07 23:16:25.332849] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.771 [2024-06-07 23:16:25.332856] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.771 [2024-06-07 23:16:25.344887] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.771 [2024-06-07 23:16:25.344899] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.771 [2024-06-07 23:16:25.356912] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.771 [2024-06-07 23:16:25.356920] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.771 [2024-06-07 23:16:25.368944] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.771 [2024-06-07 23:16:25.368953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.771 [2024-06-07 23:16:25.380974] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.771 [2024-06-07 23:16:25.380981] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.771 [2024-06-07 23:16:25.393007] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.771 [2024-06-07 23:16:25.393014] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.771 [2024-06-07 23:16:25.405040] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.771 [2024-06-07 23:16:25.405046] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.771 [2024-06-07 23:16:25.417074] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.771 [2024-06-07 23:16:25.417082] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.771 [2024-06-07 23:16:25.429106] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.771 [2024-06-07 23:16:25.429114] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.771 [2024-06-07 23:16:25.441135] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.771 [2024-06-07 23:16:25.441142] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.032 [2024-06-07 23:16:25.453167] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.032 [2024-06-07 23:16:25.453174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.032 [2024-06-07 23:16:25.465200] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.032 [2024-06-07 23:16:25.465212] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.032 [2024-06-07 23:16:25.477232] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.032 [2024-06-07 23:16:25.477238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.032 [2024-06-07 23:16:25.489263] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.032 [2024-06-07 23:16:25.489269] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.032 [2024-06-07 23:16:25.501292] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.032 [2024-06-07 23:16:25.501298] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.032 [2024-06-07 23:16:25.513327] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.032 [2024-06-07 23:16:25.513335] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.032 [2024-06-07 23:16:25.525359] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.032 [2024-06-07 23:16:25.525366] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.032 [2024-06-07 23:16:25.537392] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.032 [2024-06-07 23:16:25.537398] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.032 [2024-06-07 23:16:25.549424] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.032 [2024-06-07 23:16:25.549431] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.032 [2024-06-07 23:16:25.561459] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.032 [2024-06-07 23:16:25.561469] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.032 [2024-06-07 23:16:25.573492] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.032 [2024-06-07 23:16:25.573503] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.032 Running I/O for 5 seconds... 
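The long run of paired 'Requested NSID 1 already in use' / 'Unable to add namespace' messages that follows is consistent with the script repeatedly re-issuing nvmf_subsystem_add_ns for the namespace that is already attached while the background bdevperf (perfpid above) keeps I/O in flight: each attempt pauses the subsystem, gets rejected, and resumes it, which is the pause/resume-under-zcopy path this test exercises. A hedged reconstruction of such a loop (an inference from the trace, not a copy of zcopy.sh):

  # Re-adding NSID 1 is expected to fail every time, but each attempt still
  # pauses and resumes nqn.2016-06.io.spdk:cnode1 while bdevperf I/O is running.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  while kill -0 "$perfpid" 2> /dev/null; do     # perfpid: background bdevperf pid from the trace
      "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done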
00:20:03.032 [2024-06-07 23:16:25.589249] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.032 [2024-06-07 23:16:25.589266] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.032 [2024-06-07 23:16:25.602396] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.032 [2024-06-07 23:16:25.602411] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.032 [2024-06-07 23:16:25.615395] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.032 [2024-06-07 23:16:25.615411] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.032 [2024-06-07 23:16:25.628097] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.032 [2024-06-07 23:16:25.628113] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.032 [2024-06-07 23:16:25.640799] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.032 [2024-06-07 23:16:25.640813] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.032 [2024-06-07 23:16:25.653457] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.032 [2024-06-07 23:16:25.653472] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.032 [2024-06-07 23:16:25.666008] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.032 [2024-06-07 23:16:25.666023] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.032 [2024-06-07 23:16:25.679151] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.032 [2024-06-07 23:16:25.679166] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.032 [2024-06-07 23:16:25.692465] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.032 [2024-06-07 23:16:25.692479] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.032 [2024-06-07 23:16:25.705381] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.032 [2024-06-07 23:16:25.705399] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.294 [2024-06-07 23:16:25.718716] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.294 [2024-06-07 23:16:25.718730] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.294 [2024-06-07 23:16:25.731804] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.294 [2024-06-07 23:16:25.731818] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.294 [2024-06-07 23:16:25.744691] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.294 [2024-06-07 23:16:25.744705] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.294 [2024-06-07 23:16:25.757302] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.294 [2024-06-07 23:16:25.757316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.294 [2024-06-07 23:16:25.769969] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.294 
[2024-06-07 23:16:25.769983] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.294 [2024-06-07 23:16:25.782812] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.294 [2024-06-07 23:16:25.782827] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.294 [2024-06-07 23:16:25.796187] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.294 [2024-06-07 23:16:25.796202] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.294 [2024-06-07 23:16:25.809418] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.294 [2024-06-07 23:16:25.809433] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.294 [2024-06-07 23:16:25.822126] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.294 [2024-06-07 23:16:25.822141] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.294 [2024-06-07 23:16:25.834904] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.294 [2024-06-07 23:16:25.834919] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.294 [2024-06-07 23:16:25.847758] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.294 [2024-06-07 23:16:25.847772] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.294 [2024-06-07 23:16:25.860872] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.294 [2024-06-07 23:16:25.860887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.294 [2024-06-07 23:16:25.873585] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.294 [2024-06-07 23:16:25.873599] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.294 [2024-06-07 23:16:25.881450] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.294 [2024-06-07 23:16:25.881465] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.294 [2024-06-07 23:16:25.890010] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.294 [2024-06-07 23:16:25.890024] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.294 [2024-06-07 23:16:25.902834] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.294 [2024-06-07 23:16:25.902849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.294 [2024-06-07 23:16:25.915343] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.294 [2024-06-07 23:16:25.915358] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.294 [2024-06-07 23:16:25.928586] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.294 [2024-06-07 23:16:25.928600] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.294 [2024-06-07 23:16:25.941775] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.294 [2024-06-07 23:16:25.941790] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.294 [2024-06-07 23:16:25.954403] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.294 [2024-06-07 23:16:25.954418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.294 [2024-06-07 23:16:25.967451] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.294 [2024-06-07 23:16:25.967465] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.556 [2024-06-07 23:16:25.980274] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.556 [2024-06-07 23:16:25.980289] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.556 [2024-06-07 23:16:25.992402] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.556 [2024-06-07 23:16:25.992417] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.556 [2024-06-07 23:16:26.005422] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.556 [2024-06-07 23:16:26.005436] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.556 [2024-06-07 23:16:26.018415] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.556 [2024-06-07 23:16:26.018430] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.556 [2024-06-07 23:16:26.031266] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.556 [2024-06-07 23:16:26.031280] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.556 [2024-06-07 23:16:26.044505] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.556 [2024-06-07 23:16:26.044520] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.556 [2024-06-07 23:16:26.057776] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.556 [2024-06-07 23:16:26.057791] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.556 [2024-06-07 23:16:26.065574] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.556 [2024-06-07 23:16:26.065589] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.556 [2024-06-07 23:16:26.074292] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.556 [2024-06-07 23:16:26.074306] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.556 [2024-06-07 23:16:26.087484] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.556 [2024-06-07 23:16:26.087499] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.556 [2024-06-07 23:16:26.100707] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.556 [2024-06-07 23:16:26.100722] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.556 [2024-06-07 23:16:26.113207] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.556 [2024-06-07 23:16:26.113222] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.556 [2024-06-07 23:16:26.126364] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.556 [2024-06-07 23:16:26.126379] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.556 [2024-06-07 23:16:26.139464] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.556 [2024-06-07 23:16:26.139479] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.556 [2024-06-07 23:16:26.152343] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.556 [2024-06-07 23:16:26.152358] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.556 [2024-06-07 23:16:26.165558] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.556 [2024-06-07 23:16:26.165572] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.556 [2024-06-07 23:16:26.178039] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.556 [2024-06-07 23:16:26.178054] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.556 [2024-06-07 23:16:26.190950] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.556 [2024-06-07 23:16:26.190965] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.556 [2024-06-07 23:16:26.203868] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.556 [2024-06-07 23:16:26.203882] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.556 [2024-06-07 23:16:26.216498] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.556 [2024-06-07 23:16:26.216512] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.556 [2024-06-07 23:16:26.228705] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.556 [2024-06-07 23:16:26.228719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.817 [2024-06-07 23:16:26.237196] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.817 [2024-06-07 23:16:26.237211] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.817 [2024-06-07 23:16:26.250033] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.818 [2024-06-07 23:16:26.250048] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.818 [2024-06-07 23:16:26.263409] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.818 [2024-06-07 23:16:26.263423] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.818 [2024-06-07 23:16:26.276301] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.818 [2024-06-07 23:16:26.276316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.818 [2024-06-07 23:16:26.289250] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.818 [2024-06-07 23:16:26.289265] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.818 [2024-06-07 23:16:26.302259] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.818 [2024-06-07 23:16:26.302273] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.818 [2024-06-07 23:16:26.315513] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.818 [2024-06-07 23:16:26.315528] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.818 [2024-06-07 23:16:26.328218] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.818 [2024-06-07 23:16:26.328233] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.818 [2024-06-07 23:16:26.340847] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.818 [2024-06-07 23:16:26.340862] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.818 [2024-06-07 23:16:26.349562] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.818 [2024-06-07 23:16:26.349576] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.818 [2024-06-07 23:16:26.362112] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.818 [2024-06-07 23:16:26.362127] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.818 [2024-06-07 23:16:26.374828] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.818 [2024-06-07 23:16:26.374842] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.818 [2024-06-07 23:16:26.386911] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.818 [2024-06-07 23:16:26.386926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.818 [2024-06-07 23:16:26.399951] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.818 [2024-06-07 23:16:26.399965] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.818 [2024-06-07 23:16:26.412985] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.818 [2024-06-07 23:16:26.412999] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.818 [2024-06-07 23:16:26.426064] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.818 [2024-06-07 23:16:26.426078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.818 [2024-06-07 23:16:26.439452] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.818 [2024-06-07 23:16:26.439467] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.818 [2024-06-07 23:16:26.452502] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.818 [2024-06-07 23:16:26.452516] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.818 [2024-06-07 23:16:26.460161] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.818 [2024-06-07 23:16:26.460175] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.818 [2024-06-07 23:16:26.468697] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.818 [2024-06-07 23:16:26.468711] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.818 [2024-06-07 23:16:26.477471] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.818 [2024-06-07 23:16:26.477484] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.818 [2024-06-07 23:16:26.485846] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.818 [2024-06-07 23:16:26.485860] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.818 [2024-06-07 23:16:26.494233] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.818 [2024-06-07 23:16:26.494251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:04.080 [2024-06-07 23:16:26.502667] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:04.080 [2024-06-07 23:16:26.502681] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:04.080 [2024-06-07 23:16:26.511000] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:04.080 [2024-06-07 23:16:26.511014] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:04.080 [2024-06-07 23:16:26.519880] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:04.080 [2024-06-07 23:16:26.519894] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:04.080 [2024-06-07 23:16:26.528450] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:04.080 [2024-06-07 23:16:26.528464] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:04.080 [2024-06-07 23:16:26.536852] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:04.080 [2024-06-07 23:16:26.536865] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:04.080 [2024-06-07 23:16:26.545146] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:04.080 [2024-06-07 23:16:26.545160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:04.080 [2024-06-07 23:16:26.554168] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:04.080 [2024-06-07 23:16:26.554182] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:04.080 [2024-06-07 23:16:26.563130] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:04.080 [2024-06-07 23:16:26.563144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:04.080 [2024-06-07 23:16:26.571822] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:04.080 [2024-06-07 23:16:26.571836] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:04.080 [2024-06-07 23:16:26.580714] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:04.080 [2024-06-07 23:16:26.580728] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:04.080 [2024-06-07 23:16:26.589503] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:04.080 [2024-06-07 23:16:26.589516] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:04.080 [2024-06-07 23:16:26.598342] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:04.080 [2024-06-07 23:16:26.598355] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:04.080 [2024-06-07 23:16:26.606824] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.447 [2024-06-07 23:16:28.972801] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.447 [2024-06-07 23:16:28.972815] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.447 [2024-06-07 23:16:28.981124] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.447 [2024-06-07 23:16:28.981138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.447 [2024-06-07 23:16:28.989524] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.447 [2024-06-07 23:16:28.989538] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.447 [2024-06-07 23:16:28.998257] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.447 [2024-06-07 23:16:28.998270] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.447 [2024-06-07 23:16:29.007188] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.447 [2024-06-07 23:16:29.007202] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.447 [2024-06-07 23:16:29.015900] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.447 [2024-06-07 23:16:29.015913] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.447 [2024-06-07 23:16:29.024474] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.447 [2024-06-07 23:16:29.024488] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.447 [2024-06-07 23:16:29.033231] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.447 [2024-06-07 23:16:29.033248] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.447 [2024-06-07 23:16:29.041716] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.447 [2024-06-07 23:16:29.041731] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.447 [2024-06-07 23:16:29.050148] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.447 [2024-06-07 23:16:29.050162] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.447 [2024-06-07 23:16:29.058516] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.447 [2024-06-07 23:16:29.058530] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.447 [2024-06-07 23:16:29.067435] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.447 [2024-06-07 23:16:29.067449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.447 [2024-06-07 23:16:29.076206] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.447 [2024-06-07 23:16:29.076219] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.447 [2024-06-07 23:16:29.084275] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.447 [2024-06-07 23:16:29.084289] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.447 [2024-06-07 23:16:29.093048] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.447 [2024-06-07 23:16:29.093062] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.447 [2024-06-07 23:16:29.101979] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.447 [2024-06-07 23:16:29.101993] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.447 [2024-06-07 23:16:29.109810] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.447 [2024-06-07 23:16:29.109824] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.447 [2024-06-07 23:16:29.118876] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.447 [2024-06-07 23:16:29.118890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.740 [2024-06-07 23:16:29.127735] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.740 [2024-06-07 23:16:29.127749] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.740 [2024-06-07 23:16:29.136650] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.740 [2024-06-07 23:16:29.136664] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.740 [2024-06-07 23:16:29.145391] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.740 [2024-06-07 23:16:29.145405] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.740 [2024-06-07 23:16:29.154287] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.740 [2024-06-07 23:16:29.154301] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.740 [2024-06-07 23:16:29.163198] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.740 [2024-06-07 23:16:29.163211] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.740 [2024-06-07 23:16:29.172005] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.740 [2024-06-07 23:16:29.172019] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.740 [2024-06-07 23:16:29.180683] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.740 [2024-06-07 23:16:29.180697] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.740 [2024-06-07 23:16:29.189021] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.740 [2024-06-07 23:16:29.189034] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.740 [2024-06-07 23:16:29.197557] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.740 [2024-06-07 23:16:29.197572] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.740 [2024-06-07 23:16:29.206215] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.740 [2024-06-07 23:16:29.206229] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.740 [2024-06-07 23:16:29.214733] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.740 [2024-06-07 23:16:29.214746] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.740 [2024-06-07 23:16:29.223436] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.740 [2024-06-07 23:16:29.223450] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.740 [2024-06-07 23:16:29.232020] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.740 [2024-06-07 23:16:29.232034] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.740 [2024-06-07 23:16:29.240916] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.740 [2024-06-07 23:16:29.240931] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.740 [2024-06-07 23:16:29.249998] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.740 [2024-06-07 23:16:29.250013] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.740 [2024-06-07 23:16:29.258349] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.740 [2024-06-07 23:16:29.258364] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.740 [2024-06-07 23:16:29.267431] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.740 [2024-06-07 23:16:29.267445] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.740 [2024-06-07 23:16:29.275123] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.740 [2024-06-07 23:16:29.275138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.740 [2024-06-07 23:16:29.283760] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.740 [2024-06-07 23:16:29.283773] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.740 [2024-06-07 23:16:29.292628] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.740 [2024-06-07 23:16:29.292642] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.740 [2024-06-07 23:16:29.301415] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.740 [2024-06-07 23:16:29.301429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.740 [2024-06-07 23:16:29.310346] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.740 [2024-06-07 23:16:29.310360] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.740 [2024-06-07 23:16:29.318609] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.740 [2024-06-07 23:16:29.318623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.740 [2024-06-07 23:16:29.327469] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.740 [2024-06-07 23:16:29.327483] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.740 [2024-06-07 23:16:29.336111] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.741 [2024-06-07 23:16:29.336129] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.741 [2024-06-07 23:16:29.344676] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.741 [2024-06-07 23:16:29.344690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.741 [2024-06-07 23:16:29.353317] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.741 [2024-06-07 23:16:29.353331] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.741 [2024-06-07 23:16:29.361754] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.741 [2024-06-07 23:16:29.361768] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.741 [2024-06-07 23:16:29.370813] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.741 [2024-06-07 23:16:29.370827] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.741 [2024-06-07 23:16:29.378350] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.741 [2024-06-07 23:16:29.378364] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.741 [2024-06-07 23:16:29.387625] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.741 [2024-06-07 23:16:29.387639] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.741 [2024-06-07 23:16:29.396379] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.741 [2024-06-07 23:16:29.396393] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.741 [2024-06-07 23:16:29.405042] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.741 [2024-06-07 23:16:29.405056] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.042 [2024-06-07 23:16:29.413983] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.042 [2024-06-07 23:16:29.413997] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.042 [2024-06-07 23:16:29.422543] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.042 [2024-06-07 23:16:29.422557] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.042 [2024-06-07 23:16:29.431224] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.042 [2024-06-07 23:16:29.431239] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.042 [2024-06-07 23:16:29.439923] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.042 [2024-06-07 23:16:29.439937] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.042 [2024-06-07 23:16:29.448618] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.042 [2024-06-07 23:16:29.448632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.042 [2024-06-07 23:16:29.457739] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.042 [2024-06-07 23:16:29.457753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.042 [2024-06-07 23:16:29.471143] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.042 [2024-06-07 23:16:29.471158] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.042 [2024-06-07 23:16:29.479015] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.042 [2024-06-07 23:16:29.479029] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.042 [2024-06-07 23:16:29.487577] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.042 [2024-06-07 23:16:29.487591] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.042 [2024-06-07 23:16:29.495744] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.042 [2024-06-07 23:16:29.495759] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.042 [2024-06-07 23:16:29.504304] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.042 [2024-06-07 23:16:29.504322] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.042 [2024-06-07 23:16:29.513104] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.042 [2024-06-07 23:16:29.513118] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.042 [2024-06-07 23:16:29.521649] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.042 [2024-06-07 23:16:29.521663] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.042 [2024-06-07 23:16:29.529999] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.042 [2024-06-07 23:16:29.530013] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.042 [2024-06-07 23:16:29.538975] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.042 [2024-06-07 23:16:29.538989] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.042 [2024-06-07 23:16:29.547384] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.042 [2024-06-07 23:16:29.547398] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.042 [2024-06-07 23:16:29.556260] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.042 [2024-06-07 23:16:29.556275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.042 [2024-06-07 23:16:29.564581] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.042 [2024-06-07 23:16:29.564595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.042 [2024-06-07 23:16:29.573000] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.042 [2024-06-07 23:16:29.573015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.042 [2024-06-07 23:16:29.580997] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.042 [2024-06-07 23:16:29.581011] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.042 [2024-06-07 23:16:29.589704] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.042 [2024-06-07 23:16:29.589718] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.042 [2024-06-07 23:16:29.598264] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.042 [2024-06-07 23:16:29.598279] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.042 [2024-06-07 23:16:29.606676] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.042 [2024-06-07 23:16:29.606690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.042 [2024-06-07 23:16:29.615601] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.042 [2024-06-07 23:16:29.615615] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.042 [2024-06-07 23:16:29.624375] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.042 [2024-06-07 23:16:29.624388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.042 [2024-06-07 23:16:29.633003] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.042 [2024-06-07 23:16:29.633017] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.042 [2024-06-07 23:16:29.641603] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.042 [2024-06-07 23:16:29.641617] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.042 [2024-06-07 23:16:29.650219] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.042 [2024-06-07 23:16:29.650234] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.042 [2024-06-07 23:16:29.658728] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.042 [2024-06-07 23:16:29.658742] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.042 [2024-06-07 23:16:29.667204] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.042 [2024-06-07 23:16:29.667221] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.042 [2024-06-07 23:16:29.675658] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.042 [2024-06-07 23:16:29.675672] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.042 [2024-06-07 23:16:29.684460] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.042 [2024-06-07 23:16:29.684474] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.042 [2024-06-07 23:16:29.693234] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.042 [2024-06-07 23:16:29.693253] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.042 [2024-06-07 23:16:29.701793] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.042 [2024-06-07 23:16:29.701807] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.042 [2024-06-07 23:16:29.710356] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.042 [2024-06-07 23:16:29.710371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.042 [2024-06-07 23:16:29.719160] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.042 [2024-06-07 23:16:29.719174] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.303 [2024-06-07 23:16:29.728138] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.303 [2024-06-07 23:16:29.728152] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.303 [2024-06-07 23:16:29.736655] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.303 [2024-06-07 23:16:29.736669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.303 [2024-06-07 23:16:29.745286] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.303 [2024-06-07 23:16:29.745301] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.303 [2024-06-07 23:16:29.754149] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.303 [2024-06-07 23:16:29.754163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.303 [2024-06-07 23:16:29.762621] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.303 [2024-06-07 23:16:29.762635] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.303 [2024-06-07 23:16:29.771468] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.303 [2024-06-07 23:16:29.771482] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.303 [2024-06-07 23:16:29.780109] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.303 [2024-06-07 23:16:29.780123] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.303 [2024-06-07 23:16:29.788412] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.303 [2024-06-07 23:16:29.788426] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.303 [2024-06-07 23:16:29.796376] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.303 [2024-06-07 23:16:29.796390] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.303 [2024-06-07 23:16:29.805352] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.303 [2024-06-07 23:16:29.805367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.303 [2024-06-07 23:16:29.813637] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.303 [2024-06-07 23:16:29.813651] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.303 [2024-06-07 23:16:29.822110] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.303 [2024-06-07 23:16:29.822125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.303 [2024-06-07 23:16:29.830865] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.303 [2024-06-07 23:16:29.830883] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.303 [2024-06-07 23:16:29.839769] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.303 [2024-06-07 23:16:29.839783] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.303 [2024-06-07 23:16:29.847910] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.303 [2024-06-07 23:16:29.847924] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.303 [2024-06-07 23:16:29.857071] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.303 [2024-06-07 23:16:29.857085] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.303 [2024-06-07 23:16:29.865381] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.303 [2024-06-07 23:16:29.865395] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.303 [2024-06-07 23:16:29.873943] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.303 [2024-06-07 23:16:29.873957] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.303 [2024-06-07 23:16:29.882632] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.303 [2024-06-07 23:16:29.882645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.303 [2024-06-07 23:16:29.891445] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.303 [2024-06-07 23:16:29.891459] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.303 [2024-06-07 23:16:29.899602] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.303 [2024-06-07 23:16:29.899616] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.303 [2024-06-07 23:16:29.907966] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.303 [2024-06-07 23:16:29.907980] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.303 [2024-06-07 23:16:29.916414] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.303 [2024-06-07 23:16:29.916429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.303 [2024-06-07 23:16:29.925149] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.303 [2024-06-07 23:16:29.925163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.303 [2024-06-07 23:16:29.933184] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.303 [2024-06-07 23:16:29.933197] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.303 [2024-06-07 23:16:29.941633] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.303 [2024-06-07 23:16:29.941646] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.303 [2024-06-07 23:16:29.949625] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.304 [2024-06-07 23:16:29.949638] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.304 [2024-06-07 23:16:29.958424] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.304 [2024-06-07 23:16:29.958438] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.304 [2024-06-07 23:16:29.967486] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.304 [2024-06-07 23:16:29.967500] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.304 [2024-06-07 23:16:29.976338] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.304 [2024-06-07 23:16:29.976352] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.564 [2024-06-07 23:16:29.985017] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.564 [2024-06-07 23:16:29.985031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.564 [2024-06-07 23:16:29.993620] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.564 [2024-06-07 23:16:29.993634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.564 [2024-06-07 23:16:30.003118] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.564 [2024-06-07 23:16:30.003136] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.564 [2024-06-07 23:16:30.011542] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.564 [2024-06-07 23:16:30.011556] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.564 [2024-06-07 23:16:30.020054] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.564 [2024-06-07 23:16:30.020067] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.564 [2024-06-07 23:16:30.028977] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.564 [2024-06-07 23:16:30.028991] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.564 [2024-06-07 23:16:30.039317] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.564 [2024-06-07 23:16:30.039337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.564 [2024-06-07 23:16:30.050860] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.564 [2024-06-07 23:16:30.050875] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.564 [2024-06-07 23:16:30.063642] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.564 [2024-06-07 23:16:30.063657] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.564 [2024-06-07 23:16:30.076280] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.564 [2024-06-07 23:16:30.076296] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.564 [2024-06-07 23:16:30.089669] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.564 [2024-06-07 23:16:30.089683] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.564 [2024-06-07 23:16:30.102566] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.564 [2024-06-07 23:16:30.102580] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.564 [2024-06-07 23:16:30.115630] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.564 [2024-06-07 23:16:30.115644] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.564 [2024-06-07 23:16:30.128808] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.564 [2024-06-07 23:16:30.128823] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.564 [2024-06-07 23:16:30.141562] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.564 [2024-06-07 23:16:30.141577] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.564 [2024-06-07 23:16:30.154469] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.564 [2024-06-07 23:16:30.154484] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.564 [2024-06-07 23:16:30.167632] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.564 [2024-06-07 23:16:30.167646] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.564 [2024-06-07 23:16:30.181120] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.564 [2024-06-07 23:16:30.181135] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.564 [2024-06-07 23:16:30.193991] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.564 [2024-06-07 23:16:30.194006] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.564 [2024-06-07 23:16:30.206727] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.564 [2024-06-07 23:16:30.206741] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.564 [2024-06-07 23:16:30.220027] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.565 [2024-06-07 23:16:30.220041] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.565 [2024-06-07 23:16:30.233043] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.565 [2024-06-07 23:16:30.233058] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.825 [2024-06-07 23:16:30.245931] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.825 [2024-06-07 23:16:30.245946] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.825 [2024-06-07 23:16:30.259002] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.826 [2024-06-07 23:16:30.259017] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.826 [2024-06-07 23:16:30.272118] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.826 [2024-06-07 23:16:30.272131] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.826 [2024-06-07 23:16:30.285454] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.826 [2024-06-07 23:16:30.285469] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.826 [2024-06-07 23:16:30.298166] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.826 [2024-06-07 23:16:30.298181] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.826 [2024-06-07 23:16:30.310942] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.826 [2024-06-07 23:16:30.310957] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.826 [2024-06-07 23:16:30.323412] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.826 [2024-06-07 23:16:30.323427] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.826 [2024-06-07 23:16:30.336432] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.826 [2024-06-07 23:16:30.336447] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.826 [2024-06-07 23:16:30.349051] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.826 [2024-06-07 23:16:30.349066] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.826 [2024-06-07 23:16:30.362110] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.826 [2024-06-07 23:16:30.362125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.826 [2024-06-07 23:16:30.374685] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.826 [2024-06-07 23:16:30.374699] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.826 [2024-06-07 23:16:30.387353] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.826 [2024-06-07 23:16:30.387367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.826 [2024-06-07 23:16:30.400429] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.826 [2024-06-07 23:16:30.400443] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.826 [2024-06-07 23:16:30.413356] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.826 [2024-06-07 23:16:30.413370] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.826 [2024-06-07 23:16:30.426432] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.826 [2024-06-07 23:16:30.426446] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.826 [2024-06-07 23:16:30.439127] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.826 [2024-06-07 23:16:30.439141] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.826 [2024-06-07 23:16:30.453526] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.826 [2024-06-07 23:16:30.453540] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.826 [2024-06-07 23:16:30.467985] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.826 [2024-06-07 23:16:30.467999] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.826 [2024-06-07 23:16:30.481166] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.826 [2024-06-07 23:16:30.481180] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:07.826 [2024-06-07 23:16:30.494371] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:07.826 [2024-06-07 23:16:30.494385] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.086 [2024-06-07 23:16:30.507333] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.086 [2024-06-07 23:16:30.507347] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.086 [2024-06-07 23:16:30.520390] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.086 [2024-06-07 23:16:30.520404] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.086 [2024-06-07 23:16:30.533426] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.086 [2024-06-07 23:16:30.533440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.086 [2024-06-07 23:16:30.546569] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.086 [2024-06-07 23:16:30.546582] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.086 [2024-06-07 23:16:30.559376] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.086 [2024-06-07 23:16:30.559390] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.086 [2024-06-07 23:16:30.567411] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.086 [2024-06-07 23:16:30.567425] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.086 [2024-06-07 23:16:30.575873] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.086 [2024-06-07 23:16:30.575887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.086 [2024-06-07 23:16:30.584131] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.087 [2024-06-07 23:16:30.584145] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.087 [2024-06-07 23:16:30.592227] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.087 [2024-06-07 23:16:30.592240] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.087 00:20:08.087 Latency(us) 00:20:08.087 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:08.087 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:20:08.087 Nvme1n1 : 5.01 19845.71 155.04 0.00 0.00 6443.17 2307.41 16930.13 00:20:08.087 =================================================================================================================== 00:20:08.087 Total : 19845.71 155.04 0.00 0.00 6443.17 2307.41 16930.13 00:20:08.087 [2024-06-07 23:16:30.598388] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.087 [2024-06-07 23:16:30.598400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.087 [2024-06-07 23:16:30.606408] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.087 [2024-06-07 23:16:30.606418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.087 [2024-06-07 23:16:30.614429] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.087 [2024-06-07 23:16:30.614439] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.087 [2024-06-07 23:16:30.622451] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.087 [2024-06-07 23:16:30.622466] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.087 [2024-06-07 23:16:30.630471] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.087 [2024-06-07 23:16:30.630480] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.087 [2024-06-07 23:16:30.638489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.087 [2024-06-07 23:16:30.638496] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.087 [2024-06-07 23:16:30.646511] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.087 [2024-06-07 23:16:30.646519] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.087 [2024-06-07 23:16:30.654530] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.087 [2024-06-07 23:16:30.654538] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.087 [2024-06-07 23:16:30.662551] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.087 [2024-06-07 23:16:30.662558] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.087 [2024-06-07 23:16:30.670571] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.087 [2024-06-07 23:16:30.670577] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.087 [2024-06-07 23:16:30.678593] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.087 [2024-06-07 23:16:30.678601] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.087 [2024-06-07 23:16:30.686614] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.087 [2024-06-07 23:16:30.686622] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.087 [2024-06-07 23:16:30.694633] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.087 [2024-06-07 23:16:30.694640] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.087 [2024-06-07 23:16:30.702653] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.087 [2024-06-07 23:16:30.702661] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.087 [2024-06-07 23:16:30.710673] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:08.087 [2024-06-07 23:16:30.710679] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:08.087 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2840303) - No such process 00:20:08.087 23:16:30 -- target/zcopy.sh@49 -- # wait 2840303 00:20:08.087 23:16:30 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:08.087 23:16:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:08.087 23:16:30 -- common/autotest_common.sh@10 -- # set +x 00:20:08.087 23:16:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:08.087 23:16:30 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:20:08.087 23:16:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:08.087 23:16:30 -- common/autotest_common.sh@10 -- # set +x 
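The wall of paired errors above — subsystem.c:1793 reporting Requested NSID 1 already in use, each immediately followed by nvmf_rpc.c:1513 failing with Unable to add namespace — appears to be the zcopy test's background loop repeatedly re-adding NSID 1 to nqn.2016-06.io.spdk:cnode1 while that namespace is still attached. Once the loop is gone (the kill: (2840303) - No such process line), zcopy.sh removes the namespace, wraps malloc0 in a delay bdev, and re-adds it, as traced in the rpc_cmd calls immediately above and just below. A minimal sketch of the same sequence driven directly through rpc.py, assuming the stock scripts/rpc.py client in this workspace and the default RPC socket; the bdev names, NQN, and latency arguments are copied from the traced rpc_cmd calls:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # assumed client path
    nqn=nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_remove_ns "$nqn" 1                                # detach NSID 1 so it can be replaced
    $rpc bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000                       # avg/p99 read+write delay (microseconds, if the usual delay-bdev units apply)
    $rpc nvmf_subsystem_add_ns "$nqn" delay0 -n 1                         # re-attach the delayed bdev as NSID 1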
00:20:08.087 delay0 00:20:08.087 23:16:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:08.087 23:16:30 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:20:08.087 23:16:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:08.087 23:16:30 -- common/autotest_common.sh@10 -- # set +x 00:20:08.087 23:16:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:08.087 23:16:30 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:20:08.347 EAL: No free 2048 kB hugepages reported on node 1 00:20:08.347 [2024-06-07 23:16:30.842918] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:20:14.930 Initializing NVMe Controllers 00:20:14.930 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:14.930 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:14.930 Initialization complete. Launching workers. 00:20:14.930 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 251 00:20:14.930 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 537, failed to submit 34 00:20:14.930 success 376, unsuccess 161, failed 0 00:20:14.930 23:16:36 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:20:14.930 23:16:36 -- target/zcopy.sh@60 -- # nvmftestfini 00:20:14.930 23:16:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:14.930 23:16:36 -- nvmf/common.sh@116 -- # sync 00:20:14.930 23:16:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:14.930 23:16:36 -- nvmf/common.sh@119 -- # set +e 00:20:14.930 23:16:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:14.930 23:16:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:14.930 rmmod nvme_tcp 00:20:14.930 rmmod nvme_fabrics 00:20:14.930 rmmod nvme_keyring 00:20:14.930 23:16:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:14.930 23:16:37 -- nvmf/common.sh@123 -- # set -e 00:20:14.930 23:16:37 -- nvmf/common.sh@124 -- # return 0 00:20:14.930 23:16:37 -- nvmf/common.sh@477 -- # '[' -n 2837972 ']' 00:20:14.930 23:16:37 -- nvmf/common.sh@478 -- # killprocess 2837972 00:20:14.930 23:16:37 -- common/autotest_common.sh@926 -- # '[' -z 2837972 ']' 00:20:14.930 23:16:37 -- common/autotest_common.sh@930 -- # kill -0 2837972 00:20:14.930 23:16:37 -- common/autotest_common.sh@931 -- # uname 00:20:14.930 23:16:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:14.930 23:16:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2837972 00:20:14.930 23:16:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:14.930 23:16:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:14.930 23:16:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2837972' 00:20:14.930 killing process with pid 2837972 00:20:14.930 23:16:37 -- common/autotest_common.sh@945 -- # kill 2837972 00:20:14.930 23:16:37 -- common/autotest_common.sh@950 -- # wait 2837972 00:20:14.930 23:16:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:14.930 23:16:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:14.930 23:16:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:14.930 23:16:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
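The abort example's closing summary is internally consistent for this run: the completed and failed I/O counts, the abort submissions, and the success/unsuccess split all add up. A quick check over the numbers printed above (the grouping is inferred from this log, not from the example's documentation):

    # Counters copied from the abort summary above
    io_completed=320;     io_failed=251
    aborts_submitted=537; aborts_unsubmitted=34
    abort_success=376;    abort_unsuccess=161
    echo $(( io_completed + io_failed ))               # 571 I/Os issued in this run
    echo $(( aborts_submitted + aborts_unsubmitted ))  # 571 abort attempts in this run
    echo $(( abort_success + abort_unsuccess ))        # 537, equal to the aborts actually submitted

The rmmod lines above (nvme_tcp, nvme_fabrics, nvme_keyring) are nvmftestfini unloading the initiator-side kernel modules; killprocess 2837972 then stops the target process.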
00:20:14.930 23:16:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:14.930 23:16:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.930 23:16:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:14.930 23:16:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.845 23:16:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:16.845 00:20:16.845 real 0m32.617s 00:20:16.845 user 0m44.283s 00:20:16.845 sys 0m10.032s 00:20:16.845 23:16:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:16.845 23:16:39 -- common/autotest_common.sh@10 -- # set +x 00:20:16.845 ************************************ 00:20:16.845 END TEST nvmf_zcopy 00:20:16.845 ************************************ 00:20:16.845 23:16:39 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:20:16.845 23:16:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:16.845 23:16:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:16.845 23:16:39 -- common/autotest_common.sh@10 -- # set +x 00:20:16.845 ************************************ 00:20:16.845 START TEST nvmf_nmic 00:20:16.845 ************************************ 00:20:16.845 23:16:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:20:16.845 * Looking for test storage... 00:20:16.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:16.845 23:16:39 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:16.845 23:16:39 -- nvmf/common.sh@7 -- # uname -s 00:20:16.845 23:16:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:16.845 23:16:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:16.845 23:16:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:16.845 23:16:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:16.845 23:16:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:16.845 23:16:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:16.845 23:16:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:16.845 23:16:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:16.845 23:16:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:16.845 23:16:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:16.845 23:16:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:16.845 23:16:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:16.845 23:16:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:16.845 23:16:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:16.845 23:16:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:16.845 23:16:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:16.845 23:16:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:16.845 23:16:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:16.845 23:16:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:16.845 23:16:39 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.845 23:16:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.845 23:16:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.845 23:16:39 -- paths/export.sh@5 -- # export PATH 00:20:16.845 23:16:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.845 23:16:39 -- nvmf/common.sh@46 -- # : 0 00:20:16.845 23:16:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:16.845 23:16:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:16.845 23:16:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:16.845 23:16:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:16.845 23:16:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:16.845 23:16:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:16.845 23:16:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:16.845 23:16:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:16.845 23:16:39 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:16.845 23:16:39 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:16.845 23:16:39 -- target/nmic.sh@14 -- # nvmftestinit 00:20:16.845 23:16:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:16.845 23:16:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:16.845 23:16:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:16.845 23:16:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:16.845 23:16:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:16.845 23:16:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:20:16.845 23:16:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:16.845 23:16:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.845 23:16:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:16.845 23:16:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:16.845 23:16:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:16.846 23:16:39 -- common/autotest_common.sh@10 -- # set +x 00:20:24.986 23:16:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:24.986 23:16:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:24.986 23:16:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:24.986 23:16:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:24.986 23:16:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:24.986 23:16:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:24.986 23:16:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:24.986 23:16:46 -- nvmf/common.sh@294 -- # net_devs=() 00:20:24.986 23:16:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:24.986 23:16:46 -- nvmf/common.sh@295 -- # e810=() 00:20:24.986 23:16:46 -- nvmf/common.sh@295 -- # local -ga e810 00:20:24.986 23:16:46 -- nvmf/common.sh@296 -- # x722=() 00:20:24.986 23:16:46 -- nvmf/common.sh@296 -- # local -ga x722 00:20:24.986 23:16:46 -- nvmf/common.sh@297 -- # mlx=() 00:20:24.986 23:16:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:24.986 23:16:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:24.986 23:16:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:24.986 23:16:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:24.986 23:16:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:24.986 23:16:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:24.986 23:16:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:24.986 23:16:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:24.986 23:16:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:24.986 23:16:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:24.986 23:16:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:24.986 23:16:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:24.986 23:16:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:24.986 23:16:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:24.986 23:16:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:24.986 23:16:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:24.986 23:16:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:24.986 23:16:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:24.986 23:16:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:24.986 23:16:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:24.986 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:24.986 23:16:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:24.986 23:16:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:24.986 23:16:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:24.986 23:16:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:24.986 23:16:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:24.986 23:16:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:24.986 23:16:46 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:24.986 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:24.986 23:16:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:24.986 23:16:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:24.986 23:16:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:24.986 23:16:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:24.986 23:16:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:24.986 23:16:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:24.986 23:16:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:24.986 23:16:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:24.986 23:16:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:24.986 23:16:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.986 23:16:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:24.986 23:16:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:24.986 23:16:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:24.986 Found net devices under 0000:31:00.0: cvl_0_0 00:20:24.986 23:16:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.986 23:16:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:24.986 23:16:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.986 23:16:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:24.986 23:16:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:24.986 23:16:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:24.986 Found net devices under 0000:31:00.1: cvl_0_1 00:20:24.986 23:16:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.986 23:16:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:24.986 23:16:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:24.986 23:16:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:24.986 23:16:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:24.986 23:16:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:24.986 23:16:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:24.986 23:16:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:24.986 23:16:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:24.986 23:16:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:24.986 23:16:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:24.986 23:16:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:24.986 23:16:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:24.986 23:16:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:24.986 23:16:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:24.986 23:16:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:24.986 23:16:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:24.986 23:16:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:24.986 23:16:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:24.986 23:16:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:24.986 23:16:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:24.986 23:16:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:24.986 23:16:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
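For reference, the nvmf_tcp_init bring-up traced here reduces to the following standalone sequence: the target NIC (cvl_0_0) is moved into its own network namespace and addressed as 10.0.0.2, while the initiator NIC (cvl_0_1) stays in the root namespace as 10.0.0.1, so host and target talk over a real link on one machine. This is a condensed sketch only; the interface and namespace names are specific to this rig, and the loopback, iptables and ping steps that follow in the trace are left out.

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side (inside the namespace)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up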
00:20:24.986 23:16:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:24.986 23:16:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:24.986 23:16:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:24.986 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:24.986 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.567 ms 00:20:24.986 00:20:24.986 --- 10.0.0.2 ping statistics --- 00:20:24.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.986 rtt min/avg/max/mdev = 0.567/0.567/0.567/0.000 ms 00:20:24.986 23:16:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:24.986 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:24.986 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:20:24.986 00:20:24.986 --- 10.0.0.1 ping statistics --- 00:20:24.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.986 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:20:24.986 23:16:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:24.986 23:16:46 -- nvmf/common.sh@410 -- # return 0 00:20:24.986 23:16:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:24.986 23:16:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:24.986 23:16:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:24.986 23:16:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:24.986 23:16:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:24.986 23:16:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:24.986 23:16:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:24.986 23:16:46 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:20:24.987 23:16:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:24.987 23:16:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:24.987 23:16:46 -- common/autotest_common.sh@10 -- # set +x 00:20:24.987 23:16:46 -- nvmf/common.sh@469 -- # nvmfpid=2846788 00:20:24.987 23:16:46 -- nvmf/common.sh@470 -- # waitforlisten 2846788 00:20:24.987 23:16:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:24.987 23:16:46 -- common/autotest_common.sh@819 -- # '[' -z 2846788 ']' 00:20:24.987 23:16:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.987 23:16:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:24.987 23:16:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.987 23:16:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:24.987 23:16:46 -- common/autotest_common.sh@10 -- # set +x 00:20:24.987 [2024-06-07 23:16:46.825179] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:20:24.987 [2024-06-07 23:16:46.825260] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.987 EAL: No free 2048 kB hugepages reported on node 1 00:20:24.987 [2024-06-07 23:16:46.899968] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:24.987 [2024-06-07 23:16:46.939364] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:24.987 [2024-06-07 23:16:46.939521] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.987 [2024-06-07 23:16:46.939532] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.987 [2024-06-07 23:16:46.939541] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:24.987 [2024-06-07 23:16:46.939698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.987 [2024-06-07 23:16:46.939822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.987 [2024-06-07 23:16:46.939983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.987 [2024-06-07 23:16:46.939984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:24.987 23:16:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:24.987 23:16:47 -- common/autotest_common.sh@852 -- # return 0 00:20:24.987 23:16:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:24.987 23:16:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:24.987 23:16:47 -- common/autotest_common.sh@10 -- # set +x 00:20:24.987 23:16:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:24.987 23:16:47 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:24.987 23:16:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:24.987 23:16:47 -- common/autotest_common.sh@10 -- # set +x 00:20:24.987 [2024-06-07 23:16:47.644549] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:24.987 23:16:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:24.987 23:16:47 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:24.987 23:16:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:24.987 23:16:47 -- common/autotest_common.sh@10 -- # set +x 00:20:25.247 Malloc0 00:20:25.247 23:16:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:25.247 23:16:47 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:25.247 23:16:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:25.247 23:16:47 -- common/autotest_common.sh@10 -- # set +x 00:20:25.247 23:16:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:25.247 23:16:47 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:25.247 23:16:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:25.247 23:16:47 -- common/autotest_common.sh@10 -- # set +x 00:20:25.247 23:16:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:25.247 23:16:47 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:25.247 23:16:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:25.247 23:16:47 -- 
common/autotest_common.sh@10 -- # set +x 00:20:25.247 [2024-06-07 23:16:47.703959] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:25.247 23:16:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:25.247 23:16:47 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:20:25.247 test case1: single bdev can't be used in multiple subsystems 00:20:25.247 23:16:47 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:20:25.247 23:16:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:25.247 23:16:47 -- common/autotest_common.sh@10 -- # set +x 00:20:25.247 23:16:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:25.248 23:16:47 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:25.248 23:16:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:25.248 23:16:47 -- common/autotest_common.sh@10 -- # set +x 00:20:25.248 23:16:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:25.248 23:16:47 -- target/nmic.sh@28 -- # nmic_status=0 00:20:25.248 23:16:47 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:20:25.248 23:16:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:25.248 23:16:47 -- common/autotest_common.sh@10 -- # set +x 00:20:25.248 [2024-06-07 23:16:47.739899] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:20:25.248 [2024-06-07 23:16:47.739918] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:20:25.248 [2024-06-07 23:16:47.739925] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.248 request: 00:20:25.248 { 00:20:25.248 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:20:25.248 "namespace": { 00:20:25.248 "bdev_name": "Malloc0" 00:20:25.248 }, 00:20:25.248 "method": "nvmf_subsystem_add_ns", 00:20:25.248 "req_id": 1 00:20:25.248 } 00:20:25.248 Got JSON-RPC error response 00:20:25.248 response: 00:20:25.248 { 00:20:25.248 "code": -32602, 00:20:25.248 "message": "Invalid parameters" 00:20:25.248 } 00:20:25.248 23:16:47 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:20:25.248 23:16:47 -- target/nmic.sh@29 -- # nmic_status=1 00:20:25.248 23:16:47 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:20:25.248 23:16:47 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:20:25.248 Adding namespace failed - expected result. 
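The JSON-RPC error above is the pass condition for test case1: Malloc0 is already claimed (exclusive_write) by cnode1, so attaching it to cnode2 must be rejected with -32602. A minimal sketch of the same check driven directly through rpc.py (the harness's rpc_cmd wraps the same script; the default /var/tmp/spdk.sock socket is assumed):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  # Expected to fail: the bdev already backs a namespace in cnode1
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo "rejected as expected"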
00:20:25.248 23:16:47 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:20:25.248 test case2: host connect to nvmf target in multiple paths 00:20:25.248 23:16:47 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:25.248 23:16:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:25.248 23:16:47 -- common/autotest_common.sh@10 -- # set +x 00:20:25.248 [2024-06-07 23:16:47.752033] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:25.248 23:16:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:25.248 23:16:47 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:26.630 23:16:49 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:20:28.014 23:16:50 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:20:28.014 23:16:50 -- common/autotest_common.sh@1177 -- # local i=0 00:20:28.014 23:16:50 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:28.014 23:16:50 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:20:28.014 23:16:50 -- common/autotest_common.sh@1184 -- # sleep 2 00:20:30.562 23:16:52 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:30.562 23:16:52 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:20:30.562 23:16:52 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:20:30.562 23:16:52 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:20:30.562 23:16:52 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:20:30.562 23:16:52 -- common/autotest_common.sh@1187 -- # return 0 00:20:30.562 23:16:52 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:30.562 [global] 00:20:30.562 thread=1 00:20:30.562 invalidate=1 00:20:30.562 rw=write 00:20:30.562 time_based=1 00:20:30.562 runtime=1 00:20:30.562 ioengine=libaio 00:20:30.562 direct=1 00:20:30.562 bs=4096 00:20:30.562 iodepth=1 00:20:30.562 norandommap=0 00:20:30.562 numjobs=1 00:20:30.562 00:20:30.562 verify_dump=1 00:20:30.562 verify_backlog=512 00:20:30.562 verify_state_save=0 00:20:30.562 do_verify=1 00:20:30.562 verify=crc32c-intel 00:20:30.562 [job0] 00:20:30.562 filename=/dev/nvme0n1 00:20:30.562 Could not set queue depth (nvme0n1) 00:20:30.562 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:30.562 fio-3.35 00:20:30.562 Starting 1 thread 00:20:31.947 00:20:31.947 job0: (groupid=0, jobs=1): err= 0: pid=2848306: Fri Jun 7 23:16:54 2024 00:20:31.947 read: IOPS=16, BW=65.7KiB/s (67.3kB/s)(68.0KiB/1035msec) 00:20:31.947 slat (nsec): min=23779, max=25660, avg=24722.35, stdev=513.28 00:20:31.947 clat (usec): min=1195, max=42975, avg=39626.46, stdev=9911.69 00:20:31.947 lat (usec): min=1219, max=42999, avg=39651.19, stdev=9911.85 00:20:31.947 clat percentiles (usec): 00:20:31.947 | 1.00th=[ 1188], 5.00th=[ 1188], 10.00th=[41157], 20.00th=[41681], 00:20:31.947 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:20:31.947 | 
70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:20:31.947 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:20:31.947 | 99.99th=[42730] 00:20:31.947 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:20:31.947 slat (nsec): min=9397, max=65952, avg=27276.27, stdev=9272.53 00:20:31.947 clat (usec): min=398, max=889, avg=670.69, stdev=84.62 00:20:31.947 lat (usec): min=410, max=934, avg=697.97, stdev=88.83 00:20:31.947 clat percentiles (usec): 00:20:31.947 | 1.00th=[ 429], 5.00th=[ 529], 10.00th=[ 545], 20.00th=[ 611], 00:20:31.947 | 30.00th=[ 627], 40.00th=[ 652], 50.00th=[ 676], 60.00th=[ 709], 00:20:31.947 | 70.00th=[ 734], 80.00th=[ 750], 90.00th=[ 766], 95.00th=[ 783], 00:20:31.947 | 99.00th=[ 832], 99.50th=[ 857], 99.90th=[ 889], 99.95th=[ 889], 00:20:31.947 | 99.99th=[ 889] 00:20:31.947 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:20:31.947 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:31.947 lat (usec) : 500=2.84%, 750=78.26%, 1000=15.69% 00:20:31.947 lat (msec) : 2=0.19%, 50=3.02% 00:20:31.947 cpu : usr=1.16%, sys=0.87%, ctx=529, majf=0, minf=1 00:20:31.947 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:31.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:31.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:31.947 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:31.947 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:31.947 00:20:31.947 Run status group 0 (all jobs): 00:20:31.947 READ: bw=65.7KiB/s (67.3kB/s), 65.7KiB/s-65.7KiB/s (67.3kB/s-67.3kB/s), io=68.0KiB (69.6kB), run=1035-1035msec 00:20:31.947 WRITE: bw=1979KiB/s (2026kB/s), 1979KiB/s-1979KiB/s (2026kB/s-2026kB/s), io=2048KiB (2097kB), run=1035-1035msec 00:20:31.947 00:20:31.947 Disk stats (read/write): 00:20:31.947 nvme0n1: ios=63/512, merge=0/0, ticks=576/334, in_queue=910, util=93.89% 00:20:31.947 23:16:54 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:31.947 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:20:31.947 23:16:54 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:31.947 23:16:54 -- common/autotest_common.sh@1198 -- # local i=0 00:20:31.947 23:16:54 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:20:31.947 23:16:54 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:31.947 23:16:54 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:20:31.947 23:16:54 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:31.947 23:16:54 -- common/autotest_common.sh@1210 -- # return 0 00:20:31.947 23:16:54 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:20:31.947 23:16:54 -- target/nmic.sh@53 -- # nvmftestfini 00:20:31.947 23:16:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:31.947 23:16:54 -- nvmf/common.sh@116 -- # sync 00:20:31.947 23:16:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:31.947 23:16:54 -- nvmf/common.sh@119 -- # set +e 00:20:31.947 23:16:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:31.947 23:16:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:31.947 rmmod nvme_tcp 00:20:31.947 rmmod nvme_fabrics 00:20:31.947 rmmod nvme_keyring 00:20:31.947 23:16:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:31.947 23:16:54 -- nvmf/common.sh@123 -- # set -e 
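The "disconnected 2 controller(s)" message for a single NQN above is the point of test case2: the host connected once per listener (ports 4420 and 4421), so the same subsystem shows up as two paths, i.e. two controllers. Roughly, the host-side sequence was as follows (the hostnqn/hostid values are the ones generated for this run):

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
  HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
  nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
  # ... run I/O against the namespace, then tear down both controllers at once:
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1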
00:20:31.947 23:16:54 -- nvmf/common.sh@124 -- # return 0 00:20:31.947 23:16:54 -- nvmf/common.sh@477 -- # '[' -n 2846788 ']' 00:20:31.947 23:16:54 -- nvmf/common.sh@478 -- # killprocess 2846788 00:20:31.947 23:16:54 -- common/autotest_common.sh@926 -- # '[' -z 2846788 ']' 00:20:31.947 23:16:54 -- common/autotest_common.sh@930 -- # kill -0 2846788 00:20:31.947 23:16:54 -- common/autotest_common.sh@931 -- # uname 00:20:31.947 23:16:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:31.947 23:16:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2846788 00:20:31.947 23:16:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:31.947 23:16:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:31.947 23:16:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2846788' 00:20:31.947 killing process with pid 2846788 00:20:31.947 23:16:54 -- common/autotest_common.sh@945 -- # kill 2846788 00:20:31.947 23:16:54 -- common/autotest_common.sh@950 -- # wait 2846788 00:20:32.207 23:16:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:32.207 23:16:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:32.207 23:16:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:32.207 23:16:54 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:32.207 23:16:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:32.208 23:16:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.208 23:16:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:32.208 23:16:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.121 23:16:56 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:34.121 00:20:34.121 real 0m17.487s 00:20:34.121 user 0m47.539s 00:20:34.121 sys 0m6.164s 00:20:34.121 23:16:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:34.121 23:16:56 -- common/autotest_common.sh@10 -- # set +x 00:20:34.121 ************************************ 00:20:34.121 END TEST nvmf_nmic 00:20:34.121 ************************************ 00:20:34.382 23:16:56 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:20:34.382 23:16:56 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:34.382 23:16:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:34.382 23:16:56 -- common/autotest_common.sh@10 -- # set +x 00:20:34.382 ************************************ 00:20:34.382 START TEST nvmf_fio_target 00:20:34.382 ************************************ 00:20:34.382 23:16:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:20:34.382 * Looking for test storage... 
00:20:34.382 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:34.382 23:16:56 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:34.382 23:16:56 -- nvmf/common.sh@7 -- # uname -s 00:20:34.382 23:16:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:34.382 23:16:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:34.382 23:16:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:34.382 23:16:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:34.382 23:16:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:34.382 23:16:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:34.382 23:16:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:34.382 23:16:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:34.382 23:16:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:34.382 23:16:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:34.382 23:16:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:34.382 23:16:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:34.382 23:16:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:34.382 23:16:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:34.382 23:16:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:34.382 23:16:56 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:34.382 23:16:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:34.382 23:16:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:34.382 23:16:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:34.382 23:16:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.383 23:16:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.383 23:16:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.383 23:16:56 -- paths/export.sh@5 -- # export PATH 00:20:34.383 23:16:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.383 23:16:56 -- nvmf/common.sh@46 -- # : 0 00:20:34.383 23:16:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:34.383 23:16:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:34.383 23:16:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:34.383 23:16:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:34.383 23:16:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:34.383 23:16:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:34.383 23:16:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:34.383 23:16:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:34.383 23:16:56 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:34.383 23:16:56 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:34.383 23:16:56 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:34.383 23:16:56 -- target/fio.sh@16 -- # nvmftestinit 00:20:34.383 23:16:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:34.383 23:16:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:34.383 23:16:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:34.383 23:16:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:34.383 23:16:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:34.383 23:16:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.383 23:16:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:34.383 23:16:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.383 23:16:56 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:34.383 23:16:56 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:34.383 23:16:56 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:34.383 23:16:56 -- common/autotest_common.sh@10 -- # set +x 00:20:40.968 23:17:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:40.968 23:17:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:40.968 23:17:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:40.968 23:17:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:40.968 23:17:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:40.968 23:17:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:40.968 23:17:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:40.968 23:17:03 -- nvmf/common.sh@294 -- # net_devs=() 
00:20:40.968 23:17:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:40.968 23:17:03 -- nvmf/common.sh@295 -- # e810=() 00:20:40.968 23:17:03 -- nvmf/common.sh@295 -- # local -ga e810 00:20:40.968 23:17:03 -- nvmf/common.sh@296 -- # x722=() 00:20:40.968 23:17:03 -- nvmf/common.sh@296 -- # local -ga x722 00:20:40.968 23:17:03 -- nvmf/common.sh@297 -- # mlx=() 00:20:40.968 23:17:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:40.968 23:17:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:40.968 23:17:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:40.968 23:17:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:40.968 23:17:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:40.968 23:17:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:40.968 23:17:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:40.968 23:17:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:40.968 23:17:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:40.968 23:17:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:40.968 23:17:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:40.968 23:17:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:40.968 23:17:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:40.968 23:17:03 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:40.968 23:17:03 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:40.968 23:17:03 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:40.968 23:17:03 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:40.968 23:17:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:40.968 23:17:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:40.968 23:17:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:40.968 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:40.968 23:17:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:40.968 23:17:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:40.968 23:17:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:40.968 23:17:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:40.968 23:17:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:40.968 23:17:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:40.968 23:17:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:40.968 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:40.968 23:17:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:40.968 23:17:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:40.968 23:17:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:40.968 23:17:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:40.968 23:17:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:40.968 23:17:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:40.968 23:17:03 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:40.968 23:17:03 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:40.969 23:17:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:40.969 23:17:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.969 23:17:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:40.969 23:17:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:20:40.969 23:17:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:40.969 Found net devices under 0000:31:00.0: cvl_0_0 00:20:40.969 23:17:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.969 23:17:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:40.969 23:17:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.969 23:17:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:40.969 23:17:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:40.969 23:17:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:40.969 Found net devices under 0000:31:00.1: cvl_0_1 00:20:40.969 23:17:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.969 23:17:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:40.969 23:17:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:40.969 23:17:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:40.969 23:17:03 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:40.969 23:17:03 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:40.969 23:17:03 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:40.969 23:17:03 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:40.969 23:17:03 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:40.969 23:17:03 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:40.969 23:17:03 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:40.969 23:17:03 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:40.969 23:17:03 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:40.969 23:17:03 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:40.969 23:17:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:40.969 23:17:03 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:40.969 23:17:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:40.969 23:17:03 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:40.969 23:17:03 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:40.969 23:17:03 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:40.969 23:17:03 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:40.969 23:17:03 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:40.969 23:17:03 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:40.969 23:17:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:40.969 23:17:03 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:40.969 23:17:03 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:41.230 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:41.230 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:20:41.230 00:20:41.230 --- 10.0.0.2 ping statistics --- 00:20:41.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.230 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:20:41.230 23:17:03 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:41.230 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:41.230 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:20:41.230 00:20:41.230 --- 10.0.0.1 ping statistics --- 00:20:41.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.230 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:20:41.230 23:17:03 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:41.230 23:17:03 -- nvmf/common.sh@410 -- # return 0 00:20:41.230 23:17:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:41.230 23:17:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:41.230 23:17:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:41.230 23:17:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:41.230 23:17:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:41.230 23:17:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:41.230 23:17:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:41.230 23:17:03 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:20:41.230 23:17:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:41.230 23:17:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:41.230 23:17:03 -- common/autotest_common.sh@10 -- # set +x 00:20:41.230 23:17:03 -- nvmf/common.sh@469 -- # nvmfpid=2852723 00:20:41.230 23:17:03 -- nvmf/common.sh@470 -- # waitforlisten 2852723 00:20:41.230 23:17:03 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:41.230 23:17:03 -- common/autotest_common.sh@819 -- # '[' -z 2852723 ']' 00:20:41.230 23:17:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.230 23:17:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:41.230 23:17:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:41.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:41.230 23:17:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:41.230 23:17:03 -- common/autotest_common.sh@10 -- # set +x 00:20:41.230 [2024-06-07 23:17:03.743695] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:41.230 [2024-06-07 23:17:03.743744] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:41.230 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.230 [2024-06-07 23:17:03.810284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:41.230 [2024-06-07 23:17:03.839918] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:41.230 [2024-06-07 23:17:03.840056] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:41.230 [2024-06-07 23:17:03.840066] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:41.230 [2024-06-07 23:17:03.840075] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
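Once this target instance is up, the RPC sequence that follows in the trace builds everything the fio jobs will exercise: two plain malloc bdevs, a RAID0 volume (raid0) over two more, and a concat volume (concat0) over three, all exported through cnode1 with a TCP listener on 10.0.0.2:4420. Condensed into a sketch (same rpc.py path as this run; each bdev_malloc_create returns the next MallocN name, and the ordering here is simplified relative to the trace):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  for i in $(seq 0 6); do $RPC bdev_malloc_create 64 512; done           # Malloc0 .. Malloc6
  $RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $RPC bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for b in Malloc0 Malloc1 raid0 concat0; do
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $b
  done
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420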
00:20:41.230 [2024-06-07 23:17:03.840275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.230 [2024-06-07 23:17:03.840352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:41.230 [2024-06-07 23:17:03.840685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:41.230 [2024-06-07 23:17:03.840686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.171 23:17:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:42.171 23:17:04 -- common/autotest_common.sh@852 -- # return 0 00:20:42.171 23:17:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:42.171 23:17:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:42.171 23:17:04 -- common/autotest_common.sh@10 -- # set +x 00:20:42.171 23:17:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:42.171 23:17:04 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:42.171 [2024-06-07 23:17:04.685068] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:42.171 23:17:04 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:42.432 23:17:04 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:20:42.432 23:17:04 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:42.432 23:17:05 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:20:42.432 23:17:05 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:42.693 23:17:05 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:20:42.693 23:17:05 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:42.954 23:17:05 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:20:42.954 23:17:05 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:20:42.954 23:17:05 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:43.215 23:17:05 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:20:43.215 23:17:05 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:43.476 23:17:05 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:20:43.476 23:17:05 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:43.476 23:17:06 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:20:43.476 23:17:06 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:20:43.737 23:17:06 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:43.998 23:17:06 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:43.998 23:17:06 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:43.998 23:17:06 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:43.998 23:17:06 
-- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:44.259 23:17:06 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:44.259 [2024-06-07 23:17:06.890480] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:44.259 23:17:06 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:20:44.520 23:17:07 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:20:44.781 23:17:07 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:46.166 23:17:08 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:20:46.166 23:17:08 -- common/autotest_common.sh@1177 -- # local i=0 00:20:46.166 23:17:08 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:46.166 23:17:08 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:20:46.166 23:17:08 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:20:46.166 23:17:08 -- common/autotest_common.sh@1184 -- # sleep 2 00:20:48.712 23:17:10 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:48.712 23:17:10 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:20:48.712 23:17:10 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:20:48.712 23:17:10 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:20:48.712 23:17:10 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:20:48.712 23:17:10 -- common/autotest_common.sh@1187 -- # return 0 00:20:48.712 23:17:10 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:48.712 [global] 00:20:48.712 thread=1 00:20:48.712 invalidate=1 00:20:48.712 rw=write 00:20:48.712 time_based=1 00:20:48.712 runtime=1 00:20:48.712 ioengine=libaio 00:20:48.712 direct=1 00:20:48.712 bs=4096 00:20:48.712 iodepth=1 00:20:48.712 norandommap=0 00:20:48.712 numjobs=1 00:20:48.712 00:20:48.712 verify_dump=1 00:20:48.712 verify_backlog=512 00:20:48.712 verify_state_save=0 00:20:48.712 do_verify=1 00:20:48.712 verify=crc32c-intel 00:20:48.712 [job0] 00:20:48.712 filename=/dev/nvme0n1 00:20:48.712 [job1] 00:20:48.712 filename=/dev/nvme0n2 00:20:48.712 [job2] 00:20:48.712 filename=/dev/nvme0n3 00:20:48.712 [job3] 00:20:48.712 filename=/dev/nvme0n4 00:20:48.712 Could not set queue depth (nvme0n1) 00:20:48.712 Could not set queue depth (nvme0n2) 00:20:48.712 Could not set queue depth (nvme0n3) 00:20:48.712 Could not set queue depth (nvme0n4) 00:20:48.712 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:48.712 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:48.712 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:48.712 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:48.712 fio-3.35 
00:20:48.712 Starting 4 threads 00:20:50.125 00:20:50.125 job0: (groupid=0, jobs=1): err= 0: pid=2854335: Fri Jun 7 23:17:12 2024 00:20:50.125 read: IOPS=18, BW=74.1KiB/s (75.9kB/s)(76.0KiB/1026msec) 00:20:50.125 slat (nsec): min=25274, max=26154, avg=25467.58, stdev=210.02 00:20:50.125 clat (usec): min=1003, max=42827, avg=39580.36, stdev=9362.58 00:20:50.125 lat (usec): min=1029, max=42852, avg=39605.83, stdev=9362.56 00:20:50.125 clat percentiles (usec): 00:20:50.125 | 1.00th=[ 1004], 5.00th=[ 1004], 10.00th=[40633], 20.00th=[41157], 00:20:50.125 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:20:50.125 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:20:50.125 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:20:50.125 | 99.99th=[42730] 00:20:50.125 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:20:50.125 slat (usec): min=9, max=7094, avg=44.85, stdev=319.53 00:20:50.125 clat (usec): min=110, max=973, avg=477.87, stdev=169.98 00:20:50.125 lat (usec): min=121, max=7349, avg=522.72, stdev=353.79 00:20:50.125 clat percentiles (usec): 00:20:50.125 | 1.00th=[ 124], 5.00th=[ 174], 10.00th=[ 255], 20.00th=[ 314], 00:20:50.126 | 30.00th=[ 388], 40.00th=[ 433], 50.00th=[ 482], 60.00th=[ 523], 00:20:50.126 | 70.00th=[ 578], 80.00th=[ 635], 90.00th=[ 693], 95.00th=[ 742], 00:20:50.126 | 99.00th=[ 832], 99.50th=[ 873], 99.90th=[ 971], 99.95th=[ 971], 00:20:50.126 | 99.99th=[ 971] 00:20:50.126 bw ( KiB/s): min= 4096, max= 4096, per=41.98%, avg=4096.00, stdev= 0.00, samples=1 00:20:50.126 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:50.126 lat (usec) : 250=9.42%, 500=43.13%, 750=39.36%, 1000=4.52% 00:20:50.126 lat (msec) : 2=0.19%, 50=3.39% 00:20:50.126 cpu : usr=0.59%, sys=1.46%, ctx=534, majf=0, minf=1 00:20:50.126 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:50.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.126 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.126 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:50.126 job1: (groupid=0, jobs=1): err= 0: pid=2854340: Fri Jun 7 23:17:12 2024 00:20:50.126 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:20:50.126 slat (nsec): min=5682, max=60799, avg=25841.25, stdev=5372.34 00:20:50.126 clat (usec): min=384, max=980, avg=849.14, stdev=81.68 00:20:50.126 lat (usec): min=391, max=1006, avg=874.98, stdev=83.07 00:20:50.126 clat percentiles (usec): 00:20:50.126 | 1.00th=[ 603], 5.00th=[ 660], 10.00th=[ 742], 20.00th=[ 799], 00:20:50.126 | 30.00th=[ 840], 40.00th=[ 857], 50.00th=[ 865], 60.00th=[ 881], 00:20:50.126 | 70.00th=[ 898], 80.00th=[ 906], 90.00th=[ 930], 95.00th=[ 947], 00:20:50.126 | 99.00th=[ 971], 99.50th=[ 971], 99.90th=[ 979], 99.95th=[ 979], 00:20:50.126 | 99.99th=[ 979] 00:20:50.126 write: IOPS=1001, BW=4008KiB/s (4104kB/s)(4012KiB/1001msec); 0 zone resets 00:20:50.126 slat (nsec): min=8515, max=74344, avg=28207.77, stdev=11639.17 00:20:50.126 clat (usec): min=171, max=1001, avg=511.87, stdev=101.35 00:20:50.126 lat (usec): min=179, max=1034, avg=540.08, stdev=106.58 00:20:50.126 clat percentiles (usec): 00:20:50.126 | 1.00th=[ 239], 5.00th=[ 330], 10.00th=[ 400], 20.00th=[ 433], 00:20:50.126 | 30.00th=[ 469], 40.00th=[ 498], 50.00th=[ 519], 60.00th=[ 537], 00:20:50.126 | 70.00th=[ 562], 80.00th=[ 586], 
90.00th=[ 627], 95.00th=[ 668], 00:20:50.126 | 99.00th=[ 775], 99.50th=[ 816], 99.90th=[ 881], 99.95th=[ 1004], 00:20:50.126 | 99.99th=[ 1004] 00:20:50.126 bw ( KiB/s): min= 4096, max= 4096, per=41.98%, avg=4096.00, stdev= 0.00, samples=1 00:20:50.126 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:50.126 lat (usec) : 250=0.86%, 500=26.20%, 750=42.11%, 1000=30.76% 00:20:50.126 lat (msec) : 2=0.07% 00:20:50.126 cpu : usr=3.70%, sys=4.70%, ctx=1518, majf=0, minf=1 00:20:50.126 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:50.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.126 issued rwts: total=512,1003,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.126 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:50.126 job2: (groupid=0, jobs=1): err= 0: pid=2854348: Fri Jun 7 23:17:12 2024 00:20:50.126 read: IOPS=20, BW=81.5KiB/s (83.4kB/s)(84.0KiB/1031msec) 00:20:50.126 slat (nsec): min=24310, max=43844, avg=25928.24, stdev=4202.72 00:20:50.126 clat (usec): min=808, max=43092, avg=38317.53, stdev=12432.16 00:20:50.126 lat (usec): min=834, max=43116, avg=38343.45, stdev=12431.88 00:20:50.126 clat percentiles (usec): 00:20:50.126 | 1.00th=[ 807], 5.00th=[ 1090], 10.00th=[41681], 20.00th=[41681], 00:20:50.126 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:20:50.126 | 70.00th=[42206], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:20:50.126 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:20:50.126 | 99.99th=[43254] 00:20:50.126 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:20:50.126 slat (nsec): min=9148, max=65138, avg=26739.43, stdev=10155.71 00:20:50.126 clat (usec): min=136, max=1169, avg=406.84, stdev=208.20 00:20:50.126 lat (usec): min=146, max=1201, avg=433.58, stdev=211.96 00:20:50.126 clat percentiles (usec): 00:20:50.126 | 1.00th=[ 141], 5.00th=[ 151], 10.00th=[ 165], 20.00th=[ 255], 00:20:50.126 | 30.00th=[ 281], 40.00th=[ 302], 50.00th=[ 347], 60.00th=[ 408], 00:20:50.126 | 70.00th=[ 453], 80.00th=[ 570], 90.00th=[ 750], 95.00th=[ 840], 00:20:50.126 | 99.00th=[ 963], 99.50th=[ 988], 99.90th=[ 1172], 99.95th=[ 1172], 00:20:50.126 | 99.99th=[ 1172] 00:20:50.126 bw ( KiB/s): min= 4096, max= 4096, per=41.98%, avg=4096.00, stdev= 0.00, samples=1 00:20:50.126 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:50.126 lat (usec) : 250=17.64%, 500=53.66%, 750=15.01%, 1000=9.76% 00:20:50.126 lat (msec) : 2=0.38%, 50=3.56% 00:20:50.126 cpu : usr=0.49%, sys=1.55%, ctx=534, majf=0, minf=1 00:20:50.126 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:50.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.126 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.126 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:50.126 job3: (groupid=0, jobs=1): err= 0: pid=2854352: Fri Jun 7 23:17:12 2024 00:20:50.126 read: IOPS=16, BW=65.3KiB/s (66.9kB/s)(68.0KiB/1041msec) 00:20:50.126 slat (nsec): min=26552, max=32010, avg=29021.35, stdev=2492.54 00:20:50.126 clat (usec): min=40966, max=42998, avg=41804.06, stdev=502.16 00:20:50.126 lat (usec): min=40998, max=43024, avg=41833.09, stdev=500.66 00:20:50.126 clat percentiles (usec): 00:20:50.126 | 
1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:20:50.126 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:20:50.126 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[43254], 00:20:50.126 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:20:50.126 | 99.99th=[43254] 00:20:50.126 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:20:50.126 slat (usec): min=5, max=7204, avg=48.94, stdev=335.27 00:20:50.126 clat (usec): min=342, max=3307, avg=585.10, stdev=183.21 00:20:50.126 lat (usec): min=361, max=8127, avg=634.04, stdev=397.85 00:20:50.126 clat percentiles (usec): 00:20:50.126 | 1.00th=[ 367], 5.00th=[ 408], 10.00th=[ 433], 20.00th=[ 486], 00:20:50.126 | 30.00th=[ 506], 40.00th=[ 523], 50.00th=[ 537], 60.00th=[ 562], 00:20:50.126 | 70.00th=[ 603], 80.00th=[ 709], 90.00th=[ 807], 95.00th=[ 873], 00:20:50.126 | 99.00th=[ 955], 99.50th=[ 988], 99.90th=[ 3294], 99.95th=[ 3294], 00:20:50.126 | 99.99th=[ 3294] 00:20:50.126 bw ( KiB/s): min= 4096, max= 4096, per=41.98%, avg=4096.00, stdev= 0.00, samples=1 00:20:50.126 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:50.126 lat (usec) : 500=25.90%, 750=55.58%, 1000=15.12% 00:20:50.126 lat (msec) : 4=0.19%, 50=3.21% 00:20:50.126 cpu : usr=0.87%, sys=1.73%, ctx=535, majf=0, minf=1 00:20:50.126 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:50.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.126 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.126 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:50.126 00:20:50.126 Run status group 0 (all jobs): 00:20:50.126 READ: bw=2186KiB/s (2239kB/s), 65.3KiB/s-2046KiB/s (66.9kB/s-2095kB/s), io=2276KiB (2331kB), run=1001-1041msec 00:20:50.126 WRITE: bw=9756KiB/s (9990kB/s), 1967KiB/s-4008KiB/s (2015kB/s-4104kB/s), io=9.92MiB (10.4MB), run=1001-1041msec 00:20:50.126 00:20:50.126 Disk stats (read/write): 00:20:50.126 nvme0n1: ios=65/512, merge=0/0, ticks=641/223, in_queue=864, util=87.17% 00:20:50.126 nvme0n2: ios=561/691, merge=0/0, ticks=934/295, in_queue=1229, util=87.86% 00:20:50.126 nvme0n3: ios=73/512, merge=0/0, ticks=690/202, in_queue=892, util=95.03% 00:20:50.126 nvme0n4: ios=79/512, merge=0/0, ticks=1009/263, in_queue=1272, util=97.64% 00:20:50.126 23:17:12 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:20:50.126 [global] 00:20:50.126 thread=1 00:20:50.126 invalidate=1 00:20:50.126 rw=randwrite 00:20:50.126 time_based=1 00:20:50.126 runtime=1 00:20:50.126 ioengine=libaio 00:20:50.126 direct=1 00:20:50.126 bs=4096 00:20:50.126 iodepth=1 00:20:50.126 norandommap=0 00:20:50.126 numjobs=1 00:20:50.126 00:20:50.126 verify_dump=1 00:20:50.126 verify_backlog=512 00:20:50.126 verify_state_save=0 00:20:50.126 do_verify=1 00:20:50.126 verify=crc32c-intel 00:20:50.126 [job0] 00:20:50.126 filename=/dev/nvme0n1 00:20:50.126 [job1] 00:20:50.126 filename=/dev/nvme0n2 00:20:50.126 [job2] 00:20:50.126 filename=/dev/nvme0n3 00:20:50.126 [job3] 00:20:50.126 filename=/dev/nvme0n4 00:20:50.126 Could not set queue depth (nvme0n1) 00:20:50.126 Could not set queue depth (nvme0n2) 00:20:50.126 Could not set queue depth (nvme0n3) 00:20:50.126 Could not set queue depth (nvme0n4) 00:20:50.412 job0: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:50.412 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:50.412 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:50.412 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:50.412 fio-3.35 00:20:50.412 Starting 4 threads 00:20:51.844 00:20:51.844 job0: (groupid=0, jobs=1): err= 0: pid=2854858: Fri Jun 7 23:17:14 2024 00:20:51.844 read: IOPS=150, BW=603KiB/s (618kB/s)(604KiB/1001msec) 00:20:51.844 slat (nsec): min=9581, max=43878, avg=26541.12, stdev=4869.18 00:20:51.844 clat (usec): min=520, max=43049, avg=4419.53, stdev=11650.83 00:20:51.844 lat (usec): min=546, max=43074, avg=4446.07, stdev=11650.04 00:20:51.844 clat percentiles (usec): 00:20:51.844 | 1.00th=[ 578], 5.00th=[ 603], 10.00th=[ 627], 20.00th=[ 734], 00:20:51.844 | 30.00th=[ 783], 40.00th=[ 857], 50.00th=[ 898], 60.00th=[ 971], 00:20:51.844 | 70.00th=[ 996], 80.00th=[ 1012], 90.00th=[ 1057], 95.00th=[42206], 00:20:51.844 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:20:51.844 | 99.99th=[43254] 00:20:51.844 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:20:51.844 slat (nsec): min=9389, max=54650, avg=28529.65, stdev=8589.91 00:20:51.844 clat (usec): min=193, max=1907, avg=604.63, stdev=157.66 00:20:51.844 lat (usec): min=208, max=1938, avg=633.16, stdev=159.87 00:20:51.844 clat percentiles (usec): 00:20:51.844 | 1.00th=[ 269], 5.00th=[ 351], 10.00th=[ 429], 20.00th=[ 486], 00:20:51.844 | 30.00th=[ 529], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 635], 00:20:51.844 | 70.00th=[ 676], 80.00th=[ 717], 90.00th=[ 775], 95.00th=[ 824], 00:20:51.844 | 99.00th=[ 955], 99.50th=[ 1045], 99.90th=[ 1909], 99.95th=[ 1909], 00:20:51.844 | 99.99th=[ 1909] 00:20:51.844 bw ( KiB/s): min= 4087, max= 4087, per=51.74%, avg=4087.00, stdev= 0.00, samples=1 00:20:51.844 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:20:51.844 lat (usec) : 250=0.60%, 500=17.65%, 750=54.00%, 1000=21.12% 00:20:51.844 lat (msec) : 2=4.68%, 50=1.96% 00:20:51.844 cpu : usr=0.60%, sys=2.30%, ctx=667, majf=0, minf=1 00:20:51.844 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:51.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.844 issued rwts: total=151,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:51.844 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:51.844 job1: (groupid=0, jobs=1): err= 0: pid=2854859: Fri Jun 7 23:17:14 2024 00:20:51.844 read: IOPS=485, BW=1942KiB/s (1989kB/s)(1944KiB/1001msec) 00:20:51.844 slat (nsec): min=24983, max=60140, avg=25994.47, stdev=3612.93 00:20:51.844 clat (usec): min=799, max=42483, avg=1289.15, stdev=2635.10 00:20:51.844 lat (usec): min=825, max=42508, avg=1315.14, stdev=2635.09 00:20:51.844 clat percentiles (usec): 00:20:51.844 | 1.00th=[ 906], 5.00th=[ 1012], 10.00th=[ 1057], 20.00th=[ 1074], 00:20:51.844 | 30.00th=[ 1090], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1139], 00:20:51.844 | 70.00th=[ 1156], 80.00th=[ 1172], 90.00th=[ 1188], 95.00th=[ 1205], 00:20:51.844 | 99.00th=[ 1270], 99.50th=[ 1467], 99.90th=[42730], 99.95th=[42730], 00:20:51.844 | 99.99th=[42730] 00:20:51.844 write: IOPS=511, BW=2046KiB/s 
(2095kB/s)(2048KiB/1001msec); 0 zone resets 00:20:51.844 slat (nsec): min=8701, max=68300, avg=30014.81, stdev=7813.81 00:20:51.844 clat (usec): min=245, max=1626, avg=658.87, stdev=157.07 00:20:51.844 lat (usec): min=270, max=1639, avg=688.88, stdev=158.51 00:20:51.844 clat percentiles (usec): 00:20:51.844 | 1.00th=[ 314], 5.00th=[ 388], 10.00th=[ 449], 20.00th=[ 537], 00:20:51.844 | 30.00th=[ 578], 40.00th=[ 627], 50.00th=[ 660], 60.00th=[ 693], 00:20:51.844 | 70.00th=[ 742], 80.00th=[ 791], 90.00th=[ 848], 95.00th=[ 906], 00:20:51.844 | 99.00th=[ 971], 99.50th=[ 996], 99.90th=[ 1631], 99.95th=[ 1631], 00:20:51.844 | 99.99th=[ 1631] 00:20:51.844 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:20:51.844 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:51.844 lat (usec) : 250=0.10%, 500=8.22%, 750=28.26%, 1000=16.43% 00:20:51.844 lat (msec) : 2=46.79%, 50=0.20% 00:20:51.844 cpu : usr=1.90%, sys=4.00%, ctx=999, majf=0, minf=1 00:20:51.844 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:51.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.844 issued rwts: total=486,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:51.844 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:51.844 job2: (groupid=0, jobs=1): err= 0: pid=2854861: Fri Jun 7 23:17:14 2024 00:20:51.844 read: IOPS=17, BW=69.4KiB/s (71.1kB/s)(72.0KiB/1037msec) 00:20:51.844 slat (nsec): min=26662, max=27380, avg=27088.72, stdev=207.67 00:20:51.844 clat (usec): min=40985, max=42256, avg=41925.69, stdev=250.65 00:20:51.844 lat (usec): min=41012, max=42283, avg=41952.78, stdev=250.59 00:20:51.844 clat percentiles (usec): 00:20:51.844 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:20:51.844 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:20:51.844 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:20:51.844 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:51.844 | 99.99th=[42206] 00:20:51.844 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:20:51.844 slat (nsec): min=8636, max=56748, avg=28705.02, stdev=11623.54 00:20:51.844 clat (usec): min=228, max=3658, avg=513.39, stdev=157.05 00:20:51.844 lat (usec): min=236, max=3692, avg=542.09, stdev=159.15 00:20:51.844 clat percentiles (usec): 00:20:51.844 | 1.00th=[ 310], 5.00th=[ 375], 10.00th=[ 412], 20.00th=[ 441], 00:20:51.844 | 30.00th=[ 482], 40.00th=[ 510], 50.00th=[ 523], 60.00th=[ 537], 00:20:51.844 | 70.00th=[ 545], 80.00th=[ 562], 90.00th=[ 586], 95.00th=[ 603], 00:20:51.844 | 99.00th=[ 660], 99.50th=[ 676], 99.90th=[ 3654], 99.95th=[ 3654], 00:20:51.844 | 99.99th=[ 3654] 00:20:51.844 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:20:51.844 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:51.844 lat (usec) : 250=0.19%, 500=34.15%, 750=61.89%, 1000=0.19% 00:20:51.844 lat (msec) : 4=0.19%, 50=3.40% 00:20:51.844 cpu : usr=0.97%, sys=1.83%, ctx=533, majf=0, minf=1 00:20:51.844 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:51.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.844 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:20:51.844 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:51.844 job3: (groupid=0, jobs=1): err= 0: pid=2854865: Fri Jun 7 23:17:14 2024 00:20:51.844 read: IOPS=16, BW=66.0KiB/s (67.6kB/s)(68.0KiB/1030msec) 00:20:51.844 slat (nsec): min=9563, max=27805, avg=25149.29, stdev=5507.75 00:20:51.844 clat (usec): min=1198, max=42961, avg=39655.03, stdev=9913.97 00:20:51.844 lat (usec): min=1210, max=42988, avg=39680.18, stdev=9917.41 00:20:51.844 clat percentiles (usec): 00:20:51.844 | 1.00th=[ 1205], 5.00th=[ 1205], 10.00th=[41681], 20.00th=[41681], 00:20:51.844 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:20:51.844 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:20:51.844 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:20:51.844 | 99.99th=[42730] 00:20:51.844 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:20:51.844 slat (nsec): min=9001, max=58091, avg=31210.32, stdev=8826.89 00:20:51.845 clat (usec): min=174, max=1667, avg=653.39, stdev=171.95 00:20:51.845 lat (usec): min=183, max=1700, avg=684.60, stdev=174.82 00:20:51.845 clat percentiles (usec): 00:20:51.845 | 1.00th=[ 219], 5.00th=[ 334], 10.00th=[ 437], 20.00th=[ 529], 00:20:51.845 | 30.00th=[ 578], 40.00th=[ 619], 50.00th=[ 660], 60.00th=[ 701], 00:20:51.845 | 70.00th=[ 742], 80.00th=[ 783], 90.00th=[ 865], 95.00th=[ 922], 00:20:51.845 | 99.00th=[ 1020], 99.50th=[ 1045], 99.90th=[ 1663], 99.95th=[ 1663], 00:20:51.845 | 99.99th=[ 1663] 00:20:51.845 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:20:51.845 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:51.845 lat (usec) : 250=1.70%, 500=13.80%, 750=55.20%, 1000=24.57% 00:20:51.845 lat (msec) : 2=1.70%, 50=3.02% 00:20:51.845 cpu : usr=1.36%, sys=1.75%, ctx=531, majf=0, minf=1 00:20:51.845 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:51.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.845 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:51.845 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:51.845 00:20:51.845 Run status group 0 (all jobs): 00:20:51.845 READ: bw=2592KiB/s (2654kB/s), 66.0KiB/s-1942KiB/s (67.6kB/s-1989kB/s), io=2688KiB (2753kB), run=1001-1037msec 00:20:51.845 WRITE: bw=7900KiB/s (8089kB/s), 1975KiB/s-2046KiB/s (2022kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1037msec 00:20:51.845 00:20:51.845 Disk stats (read/write): 00:20:51.845 nvme0n1: ios=62/512, merge=0/0, ticks=1179/292, in_queue=1471, util=88.58% 00:20:51.845 nvme0n2: ios=395/512, merge=0/0, ticks=533/258, in_queue=791, util=91.75% 00:20:51.845 nvme0n3: ios=71/512, merge=0/0, ticks=819/219, in_queue=1038, util=96.84% 00:20:51.845 nvme0n4: ios=55/512, merge=0/0, ticks=1261/260, in_queue=1521, util=99.36% 00:20:51.845 23:17:14 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:20:51.845 [global] 00:20:51.845 thread=1 00:20:51.845 invalidate=1 00:20:51.845 rw=write 00:20:51.845 time_based=1 00:20:51.845 runtime=1 00:20:51.845 ioengine=libaio 00:20:51.845 direct=1 00:20:51.845 bs=4096 00:20:51.845 iodepth=128 00:20:51.845 norandommap=0 00:20:51.845 numjobs=1 00:20:51.845 00:20:51.845 verify_dump=1 00:20:51.845 verify_backlog=512 00:20:51.845 verify_state_save=0 
00:20:51.845 do_verify=1 00:20:51.845 verify=crc32c-intel 00:20:51.845 [job0] 00:20:51.845 filename=/dev/nvme0n1 00:20:51.845 [job1] 00:20:51.845 filename=/dev/nvme0n2 00:20:51.845 [job2] 00:20:51.845 filename=/dev/nvme0n3 00:20:51.845 [job3] 00:20:51.845 filename=/dev/nvme0n4 00:20:51.845 Could not set queue depth (nvme0n1) 00:20:51.845 Could not set queue depth (nvme0n2) 00:20:51.845 Could not set queue depth (nvme0n3) 00:20:51.845 Could not set queue depth (nvme0n4) 00:20:52.107 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:52.107 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:52.107 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:52.107 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:52.107 fio-3.35 00:20:52.107 Starting 4 threads 00:20:53.495 00:20:53.495 job0: (groupid=0, jobs=1): err= 0: pid=2855394: Fri Jun 7 23:17:15 2024 00:20:53.495 read: IOPS=6148, BW=24.0MiB/s (25.2MB/s)(24.2MiB/1008msec) 00:20:53.495 slat (nsec): min=965, max=9230.3k, avg=81090.03, stdev=585945.16 00:20:53.495 clat (usec): min=1981, max=28785, avg=10793.72, stdev=3294.20 00:20:53.495 lat (usec): min=1987, max=28794, avg=10874.81, stdev=3311.67 00:20:53.495 clat percentiles (usec): 00:20:53.495 | 1.00th=[ 4948], 5.00th=[ 6521], 10.00th=[ 7111], 20.00th=[ 8356], 00:20:53.495 | 30.00th=[ 9110], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10814], 00:20:53.495 | 70.00th=[11863], 80.00th=[13566], 90.00th=[15139], 95.00th=[16450], 00:20:53.495 | 99.00th=[19006], 99.50th=[23725], 99.90th=[28705], 99.95th=[28705], 00:20:53.495 | 99.99th=[28705] 00:20:53.495 write: IOPS=6603, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1008msec); 0 zone resets 00:20:53.495 slat (nsec): min=1670, max=9103.8k, avg=69511.65, stdev=453545.86 00:20:53.495 clat (usec): min=1324, max=25269, avg=9095.50, stdev=2890.42 00:20:53.495 lat (usec): min=1424, max=25277, avg=9165.01, stdev=2902.02 00:20:53.495 clat percentiles (usec): 00:20:53.495 | 1.00th=[ 3294], 5.00th=[ 4490], 10.00th=[ 5866], 20.00th=[ 6456], 00:20:53.495 | 30.00th=[ 7242], 40.00th=[ 8291], 50.00th=[ 9634], 60.00th=[10290], 00:20:53.495 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11338], 95.00th=[14615], 00:20:53.495 | 99.00th=[18482], 99.50th=[19792], 99.90th=[22938], 99.95th=[25297], 00:20:53.495 | 99.99th=[25297] 00:20:53.495 bw ( KiB/s): min=25096, max=27568, per=22.82%, avg=26332.00, stdev=1747.97, samples=2 00:20:53.496 iops : min= 6274, max= 6892, avg=6583.00, stdev=436.99, samples=2 00:20:53.496 lat (msec) : 2=0.09%, 4=1.64%, 10=48.16%, 20=49.46%, 50=0.65% 00:20:53.496 cpu : usr=5.16%, sys=6.45%, ctx=516, majf=0, minf=1 00:20:53.496 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:20:53.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:53.496 issued rwts: total=6198,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:53.496 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:53.496 job1: (groupid=0, jobs=1): err= 0: pid=2855395: Fri Jun 7 23:17:15 2024 00:20:53.496 read: IOPS=8159, BW=31.9MiB/s (33.4MB/s)(32.0MiB/1004msec) 00:20:53.496 slat (nsec): min=946, max=7113.9k, avg=62838.08, stdev=471158.45 00:20:53.496 clat (usec): min=2848, max=15296, avg=8121.36, stdev=2200.40 
00:20:53.496 lat (usec): min=2855, max=15302, avg=8184.20, stdev=2217.12 00:20:53.496 clat percentiles (usec): 00:20:53.496 | 1.00th=[ 3425], 5.00th=[ 4686], 10.00th=[ 5800], 20.00th=[ 6587], 00:20:53.496 | 30.00th=[ 7177], 40.00th=[ 7504], 50.00th=[ 7701], 60.00th=[ 7963], 00:20:53.496 | 70.00th=[ 8717], 80.00th=[ 9634], 90.00th=[11600], 95.00th=[12780], 00:20:53.496 | 99.00th=[13960], 99.50th=[14222], 99.90th=[15270], 99.95th=[15270], 00:20:53.496 | 99.99th=[15270] 00:20:53.496 write: IOPS=8565, BW=33.5MiB/s (35.1MB/s)(33.6MiB/1004msec); 0 zone resets 00:20:53.496 slat (nsec): min=1640, max=6341.0k, avg=52451.24, stdev=312668.23 00:20:53.496 clat (usec): min=1270, max=14396, avg=7079.20, stdev=1738.62 00:20:53.496 lat (usec): min=1510, max=14399, avg=7131.66, stdev=1735.02 00:20:53.496 clat percentiles (usec): 00:20:53.496 | 1.00th=[ 2540], 5.00th=[ 3687], 10.00th=[ 4817], 20.00th=[ 5735], 00:20:53.496 | 30.00th=[ 6456], 40.00th=[ 7242], 50.00th=[ 7570], 60.00th=[ 7767], 00:20:53.496 | 70.00th=[ 7898], 80.00th=[ 8094], 90.00th=[ 8356], 95.00th=[ 9765], 00:20:53.496 | 99.00th=[11338], 99.50th=[12649], 99.90th=[14091], 99.95th=[14222], 00:20:53.496 | 99.99th=[14353] 00:20:53.496 bw ( KiB/s): min=33784, max=33992, per=29.37%, avg=33888.00, stdev=147.08, samples=2 00:20:53.496 iops : min= 8446, max= 8498, avg=8472.00, stdev=36.77, samples=2 00:20:53.496 lat (msec) : 2=0.13%, 4=4.10%, 10=85.52%, 20=10.25% 00:20:53.496 cpu : usr=5.28%, sys=7.07%, ctx=786, majf=0, minf=1 00:20:53.496 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:20:53.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:53.496 issued rwts: total=8192,8600,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:53.496 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:53.496 job2: (groupid=0, jobs=1): err= 0: pid=2855396: Fri Jun 7 23:17:15 2024 00:20:53.496 read: IOPS=6146, BW=24.0MiB/s (25.2MB/s)(24.1MiB/1003msec) 00:20:53.496 slat (nsec): min=899, max=17144k, avg=80063.82, stdev=569565.89 00:20:53.496 clat (usec): min=1311, max=37753, avg=10187.13, stdev=2763.83 00:20:53.496 lat (usec): min=5357, max=37779, avg=10267.20, stdev=2815.85 00:20:53.496 clat percentiles (usec): 00:20:53.496 | 1.00th=[ 6849], 5.00th=[ 7832], 10.00th=[ 8586], 20.00th=[ 9110], 00:20:53.496 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9765], 00:20:53.496 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[11600], 95.00th=[15533], 00:20:53.496 | 99.00th=[24773], 99.50th=[26346], 99.90th=[29230], 99.95th=[29230], 00:20:53.496 | 99.99th=[38011] 00:20:53.496 write: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec); 0 zone resets 00:20:53.496 slat (nsec): min=1612, max=10091k, avg=69337.51, stdev=447644.74 00:20:53.496 clat (usec): min=1239, max=29586, avg=9688.36, stdev=2058.87 00:20:53.496 lat (usec): min=1250, max=29591, avg=9757.70, stdev=2093.51 00:20:53.496 clat percentiles (usec): 00:20:53.496 | 1.00th=[ 4621], 5.00th=[ 6980], 10.00th=[ 8160], 20.00th=[ 8848], 00:20:53.496 | 30.00th=[ 9110], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[ 9765], 00:20:53.496 | 70.00th=[10028], 80.00th=[10290], 90.00th=[11207], 95.00th=[12649], 00:20:53.496 | 99.00th=[17171], 99.50th=[21365], 99.90th=[21627], 99.95th=[21627], 00:20:53.496 | 99.99th=[29492] 00:20:53.496 bw ( KiB/s): min=24576, max=27816, per=22.70%, avg=26196.00, stdev=2291.03, samples=2 00:20:53.496 iops : min= 6144, max= 6954, avg=6549.00, 
stdev=572.76, samples=2 00:20:53.496 lat (msec) : 2=0.09%, 4=0.03%, 10=70.37%, 20=28.02%, 50=1.50% 00:20:53.496 cpu : usr=3.89%, sys=6.29%, ctx=553, majf=0, minf=1 00:20:53.496 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:20:53.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:53.496 issued rwts: total=6165,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:53.496 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:53.496 job3: (groupid=0, jobs=1): err= 0: pid=2855397: Fri Jun 7 23:17:15 2024 00:20:53.496 read: IOPS=6866, BW=26.8MiB/s (28.1MB/s)(27.0MiB/1006msec) 00:20:53.496 slat (nsec): min=973, max=8000.2k, avg=75986.19, stdev=550579.98 00:20:53.496 clat (usec): min=1222, max=21183, avg=9622.09, stdev=2444.47 00:20:53.496 lat (usec): min=3010, max=21209, avg=9698.07, stdev=2459.48 00:20:53.496 clat percentiles (usec): 00:20:53.496 | 1.00th=[ 4146], 5.00th=[ 6194], 10.00th=[ 7308], 20.00th=[ 8029], 00:20:53.496 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9503], 00:20:53.496 | 70.00th=[10028], 80.00th=[11207], 90.00th=[13304], 95.00th=[14746], 00:20:53.496 | 99.00th=[16188], 99.50th=[16712], 99.90th=[18220], 99.95th=[18220], 00:20:53.496 | 99.99th=[21103] 00:20:53.496 write: IOPS=7125, BW=27.8MiB/s (29.2MB/s)(28.0MiB/1006msec); 0 zone resets 00:20:53.496 slat (nsec): min=1684, max=12971k, avg=62029.04, stdev=419751.28 00:20:53.496 clat (usec): min=1928, max=25515, avg=8466.12, stdev=2972.69 00:20:53.496 lat (usec): min=1935, max=25528, avg=8528.14, stdev=2988.94 00:20:53.496 clat percentiles (usec): 00:20:53.496 | 1.00th=[ 2835], 5.00th=[ 3785], 10.00th=[ 5145], 20.00th=[ 6390], 00:20:53.496 | 30.00th=[ 7177], 40.00th=[ 8029], 50.00th=[ 8717], 60.00th=[ 8848], 00:20:53.496 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[11994], 95.00th=[12780], 00:20:53.496 | 99.00th=[19006], 99.50th=[21627], 99.90th=[21627], 99.95th=[21627], 00:20:53.496 | 99.99th=[25560] 00:20:53.496 bw ( KiB/s): min=28656, max=28688, per=24.85%, avg=28672.00, stdev=22.63, samples=2 00:20:53.496 iops : min= 7164, max= 7172, avg=7168.00, stdev= 5.66, samples=2 00:20:53.496 lat (msec) : 2=0.05%, 4=3.08%, 10=73.55%, 20=22.93%, 50=0.40% 00:20:53.496 cpu : usr=5.37%, sys=6.57%, ctx=683, majf=0, minf=1 00:20:53.496 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:20:53.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:53.496 issued rwts: total=6908,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:53.496 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:53.496 00:20:53.496 Run status group 0 (all jobs): 00:20:53.496 READ: bw=106MiB/s (112MB/s), 24.0MiB/s-31.9MiB/s (25.2MB/s-33.4MB/s), io=107MiB (112MB), run=1003-1008msec 00:20:53.496 WRITE: bw=113MiB/s (118MB/s), 25.8MiB/s-33.5MiB/s (27.0MB/s-35.1MB/s), io=114MiB (119MB), run=1003-1008msec 00:20:53.496 00:20:53.496 Disk stats (read/write): 00:20:53.496 nvme0n1: ios=5162/5615, merge=0/0, ticks=52987/48052, in_queue=101039, util=87.17% 00:20:53.496 nvme0n2: ios=6826/7168, merge=0/0, ticks=53580/48839, in_queue=102419, util=88.80% 00:20:53.496 nvme0n3: ios=5139/5491, merge=0/0, ticks=30940/28965, in_queue=59905, util=92.62% 00:20:53.496 nvme0n4: ios=5683/6031, merge=0/0, ticks=51948/49587, in_queue=101535, util=97.23% 00:20:53.496 23:17:15 -- 
target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:20:53.496 [global] 00:20:53.496 thread=1 00:20:53.496 invalidate=1 00:20:53.496 rw=randwrite 00:20:53.496 time_based=1 00:20:53.496 runtime=1 00:20:53.496 ioengine=libaio 00:20:53.496 direct=1 00:20:53.496 bs=4096 00:20:53.496 iodepth=128 00:20:53.496 norandommap=0 00:20:53.496 numjobs=1 00:20:53.496 00:20:53.496 verify_dump=1 00:20:53.496 verify_backlog=512 00:20:53.496 verify_state_save=0 00:20:53.496 do_verify=1 00:20:53.496 verify=crc32c-intel 00:20:53.496 [job0] 00:20:53.496 filename=/dev/nvme0n1 00:20:53.496 [job1] 00:20:53.496 filename=/dev/nvme0n2 00:20:53.496 [job2] 00:20:53.496 filename=/dev/nvme0n3 00:20:53.496 [job3] 00:20:53.496 filename=/dev/nvme0n4 00:20:53.496 Could not set queue depth (nvme0n1) 00:20:53.496 Could not set queue depth (nvme0n2) 00:20:53.496 Could not set queue depth (nvme0n3) 00:20:53.496 Could not set queue depth (nvme0n4) 00:20:53.754 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:53.754 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:53.754 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:53.754 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:53.754 fio-3.35 00:20:53.754 Starting 4 threads 00:20:55.156 00:20:55.156 job0: (groupid=0, jobs=1): err= 0: pid=2855920: Fri Jun 7 23:17:17 2024 00:20:55.156 read: IOPS=5945, BW=23.2MiB/s (24.4MB/s)(23.3MiB/1004msec) 00:20:55.156 slat (nsec): min=915, max=8291.2k, avg=79054.38, stdev=484485.02 00:20:55.156 clat (usec): min=874, max=17134, avg=10411.66, stdev=1600.71 00:20:55.156 lat (usec): min=3084, max=17527, avg=10490.72, stdev=1650.47 00:20:55.156 clat percentiles (usec): 00:20:55.156 | 1.00th=[ 5473], 5.00th=[ 7504], 10.00th=[ 8455], 20.00th=[ 9503], 00:20:55.156 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10552], 60.00th=[10814], 00:20:55.156 | 70.00th=[11076], 80.00th=[11469], 90.00th=[11863], 95.00th=[13173], 00:20:55.156 | 99.00th=[14615], 99.50th=[14746], 99.90th=[15401], 99.95th=[15664], 00:20:55.156 | 99.99th=[17171] 00:20:55.156 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:20:55.156 slat (nsec): min=1578, max=14482k, avg=77791.85, stdev=479701.25 00:20:55.156 clat (usec): min=1207, max=27285, avg=10610.28, stdev=3284.31 00:20:55.156 lat (usec): min=1217, max=27295, avg=10688.07, stdev=3310.21 00:20:55.156 clat percentiles (usec): 00:20:55.156 | 1.00th=[ 4146], 5.00th=[ 5604], 10.00th=[ 6521], 20.00th=[ 8979], 00:20:55.156 | 30.00th=[ 9503], 40.00th=[ 9896], 50.00th=[10552], 60.00th=[10945], 00:20:55.156 | 70.00th=[11338], 80.00th=[11731], 90.00th=[13829], 95.00th=[17433], 00:20:55.156 | 99.00th=[24773], 99.50th=[24773], 99.90th=[25560], 99.95th=[25560], 00:20:55.156 | 99.99th=[27395] 00:20:55.156 bw ( KiB/s): min=24576, max=24576, per=24.21%, avg=24576.00, stdev= 0.00, samples=2 00:20:55.156 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:20:55.156 lat (usec) : 1000=0.01% 00:20:55.156 lat (msec) : 2=0.02%, 4=0.44%, 10=36.82%, 20=61.84%, 50=0.88% 00:20:55.156 cpu : usr=4.29%, sys=5.38%, ctx=588, majf=0, minf=1 00:20:55.156 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:20:55.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.156 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:55.156 issued rwts: total=5969,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.156 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:55.156 job1: (groupid=0, jobs=1): err= 0: pid=2855922: Fri Jun 7 23:17:17 2024 00:20:55.156 read: IOPS=6609, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1007msec) 00:20:55.156 slat (nsec): min=985, max=27682k, avg=77051.28, stdev=640555.30 00:20:55.156 clat (usec): min=1934, max=75678, avg=10689.83, stdev=8062.22 00:20:55.156 lat (usec): min=3286, max=75725, avg=10766.88, stdev=8109.82 00:20:55.156 clat percentiles (usec): 00:20:55.156 | 1.00th=[ 4621], 5.00th=[ 5932], 10.00th=[ 6456], 20.00th=[ 7111], 00:20:55.156 | 30.00th=[ 7635], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 9241], 00:20:55.156 | 70.00th=[10421], 80.00th=[11731], 90.00th=[16581], 95.00th=[19268], 00:20:55.156 | 99.00th=[64226], 99.50th=[64226], 99.90th=[68682], 99.95th=[68682], 00:20:55.156 | 99.99th=[76022] 00:20:55.156 write: IOPS=7092, BW=27.7MiB/s (29.1MB/s)(27.9MiB/1007msec); 0 zone resets 00:20:55.156 slat (nsec): min=1598, max=13722k, avg=62770.70, stdev=478375.77 00:20:55.156 clat (usec): min=990, max=28254, avg=7916.46, stdev=3543.51 00:20:55.156 lat (usec): min=1323, max=28262, avg=7979.24, stdev=3552.98 00:20:55.156 clat percentiles (usec): 00:20:55.156 | 1.00th=[ 2638], 5.00th=[ 4178], 10.00th=[ 4948], 20.00th=[ 5538], 00:20:55.156 | 30.00th=[ 6259], 40.00th=[ 6915], 50.00th=[ 7504], 60.00th=[ 7963], 00:20:55.156 | 70.00th=[ 8356], 80.00th=[ 8717], 90.00th=[10945], 95.00th=[15139], 00:20:55.156 | 99.00th=[25035], 99.50th=[26870], 99.90th=[27919], 99.95th=[27919], 00:20:55.156 | 99.99th=[28181] 00:20:55.156 bw ( KiB/s): min=24576, max=31544, per=27.65%, avg=28060.00, stdev=4927.12, samples=2 00:20:55.156 iops : min= 6144, max= 7886, avg=7015.00, stdev=1231.78, samples=2 00:20:55.156 lat (usec) : 1000=0.01% 00:20:55.156 lat (msec) : 2=0.17%, 4=2.12%, 10=75.87%, 20=18.63%, 50=2.51% 00:20:55.156 lat (msec) : 100=0.70% 00:20:55.156 cpu : usr=4.37%, sys=7.06%, ctx=451, majf=0, minf=1 00:20:55.156 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:20:55.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.156 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:55.156 issued rwts: total=6656,7142,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.156 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:55.156 job2: (groupid=0, jobs=1): err= 0: pid=2855923: Fri Jun 7 23:17:17 2024 00:20:55.157 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:20:55.157 slat (nsec): min=955, max=11955k, avg=88452.15, stdev=627158.32 00:20:55.157 clat (usec): min=1234, max=65129, avg=11078.32, stdev=6162.37 00:20:55.157 lat (usec): min=1239, max=65137, avg=11166.77, stdev=6240.19 00:20:55.157 clat percentiles (usec): 00:20:55.157 | 1.00th=[ 2245], 5.00th=[ 4686], 10.00th=[ 7111], 20.00th=[ 8717], 00:20:55.157 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10159], 00:20:55.157 | 70.00th=[11076], 80.00th=[12256], 90.00th=[16909], 95.00th=[19006], 00:20:55.157 | 99.00th=[42206], 99.50th=[57410], 99.90th=[61604], 99.95th=[65274], 00:20:55.157 | 99.99th=[65274] 00:20:55.157 write: IOPS=5269, BW=20.6MiB/s (21.6MB/s)(20.6MiB/1001msec); 0 zone resets 00:20:55.157 slat (nsec): min=1568, max=9271.5k, avg=88565.22, stdev=492562.20 00:20:55.157 clat (usec): min=533, 
max=65108, avg=13337.38, stdev=12253.61 00:20:55.157 lat (usec): min=548, max=65112, avg=13425.95, stdev=12324.71 00:20:55.157 clat percentiles (usec): 00:20:55.157 | 1.00th=[ 889], 5.00th=[ 2245], 10.00th=[ 4359], 20.00th=[ 8455], 00:20:55.157 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10028], 00:20:55.157 | 70.00th=[10421], 80.00th=[12649], 90.00th=[35390], 95.00th=[47973], 00:20:55.157 | 99.00th=[53216], 99.50th=[53740], 99.90th=[56361], 99.95th=[56361], 00:20:55.157 | 99.99th=[65274] 00:20:55.157 bw ( KiB/s): min=19504, max=21680, per=20.29%, avg=20592.00, stdev=1538.66, samples=2 00:20:55.157 iops : min= 4876, max= 5420, avg=5148.00, stdev=384.67, samples=2 00:20:55.157 lat (usec) : 750=0.30%, 1000=0.58% 00:20:55.157 lat (msec) : 2=1.39%, 4=3.75%, 10=52.60%, 20=32.92%, 50=6.52% 00:20:55.157 lat (msec) : 100=1.94% 00:20:55.157 cpu : usr=3.80%, sys=5.10%, ctx=522, majf=0, minf=1 00:20:55.157 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:20:55.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.157 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:55.157 issued rwts: total=5120,5275,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.157 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:55.157 job3: (groupid=0, jobs=1): err= 0: pid=2855925: Fri Jun 7 23:17:17 2024 00:20:55.157 read: IOPS=6609, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1007msec) 00:20:55.157 slat (nsec): min=926, max=8888.8k, avg=74133.64, stdev=497895.76 00:20:55.157 clat (usec): min=3339, max=21172, avg=9487.82, stdev=2363.06 00:20:55.157 lat (usec): min=3342, max=21180, avg=9561.95, stdev=2395.73 00:20:55.157 clat percentiles (usec): 00:20:55.157 | 1.00th=[ 5604], 5.00th=[ 6783], 10.00th=[ 7570], 20.00th=[ 8029], 00:20:55.157 | 30.00th=[ 8356], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8979], 00:20:55.157 | 70.00th=[ 9634], 80.00th=[11076], 90.00th=[13042], 95.00th=[14877], 00:20:55.157 | 99.00th=[17171], 99.50th=[17695], 99.90th=[19006], 99.95th=[19792], 00:20:55.157 | 99.99th=[21103] 00:20:55.157 write: IOPS=6941, BW=27.1MiB/s (28.4MB/s)(27.3MiB/1007msec); 0 zone resets 00:20:55.157 slat (nsec): min=1538, max=17625k, avg=68635.75, stdev=493280.38 00:20:55.157 clat (usec): min=1049, max=31941, avg=9240.09, stdev=3955.08 00:20:55.157 lat (usec): min=1058, max=31950, avg=9308.72, stdev=3980.56 00:20:55.157 clat percentiles (usec): 00:20:55.157 | 1.00th=[ 2966], 5.00th=[ 4621], 10.00th=[ 5669], 20.00th=[ 7373], 00:20:55.157 | 30.00th=[ 7898], 40.00th=[ 8356], 50.00th=[ 8586], 60.00th=[ 8848], 00:20:55.157 | 70.00th=[ 8979], 80.00th=[10814], 90.00th=[12387], 95.00th=[17695], 00:20:55.157 | 99.00th=[26608], 99.50th=[27395], 99.90th=[31851], 99.95th=[31851], 00:20:55.157 | 99.99th=[31851] 00:20:55.157 bw ( KiB/s): min=24576, max=30328, per=27.05%, avg=27452.00, stdev=4067.28, samples=2 00:20:55.157 iops : min= 6144, max= 7582, avg=6863.00, stdev=1016.82, samples=2 00:20:55.157 lat (msec) : 2=0.07%, 4=1.77%, 10=72.67%, 20=24.09%, 50=1.41% 00:20:55.157 cpu : usr=3.58%, sys=5.96%, ctx=731, majf=0, minf=1 00:20:55.157 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:20:55.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.157 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:55.157 issued rwts: total=6656,6990,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.157 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:55.157 
00:20:55.157 Run status group 0 (all jobs): 00:20:55.157 READ: bw=94.7MiB/s (99.3MB/s), 20.0MiB/s-25.8MiB/s (20.9MB/s-27.1MB/s), io=95.3MiB (99.9MB), run=1001-1007msec 00:20:55.157 WRITE: bw=99.1MiB/s (104MB/s), 20.6MiB/s-27.7MiB/s (21.6MB/s-29.1MB/s), io=99.8MiB (105MB), run=1001-1007msec 00:20:55.157 00:20:55.157 Disk stats (read/write): 00:20:55.157 nvme0n1: ios=5165/5139, merge=0/0, ticks=25688/28464, in_queue=54152, util=86.87% 00:20:55.157 nvme0n2: ios=5450/5632, merge=0/0, ticks=49476/42372, in_queue=91848, util=87.77% 00:20:55.157 nvme0n3: ios=4153/4100, merge=0/0, ticks=32925/46535, in_queue=79460, util=92.62% 00:20:55.157 nvme0n4: ios=5433/5632, merge=0/0, ticks=28316/31867, in_queue=60183, util=96.27% 00:20:55.157 23:17:17 -- target/fio.sh@55 -- # sync 00:20:55.157 23:17:17 -- target/fio.sh@59 -- # fio_pid=2856262 00:20:55.157 23:17:17 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:20:55.157 23:17:17 -- target/fio.sh@61 -- # sleep 3 00:20:55.157 [global] 00:20:55.157 thread=1 00:20:55.157 invalidate=1 00:20:55.157 rw=read 00:20:55.157 time_based=1 00:20:55.157 runtime=10 00:20:55.157 ioengine=libaio 00:20:55.157 direct=1 00:20:55.157 bs=4096 00:20:55.157 iodepth=1 00:20:55.157 norandommap=1 00:20:55.157 numjobs=1 00:20:55.157 00:20:55.157 [job0] 00:20:55.157 filename=/dev/nvme0n1 00:20:55.157 [job1] 00:20:55.157 filename=/dev/nvme0n2 00:20:55.157 [job2] 00:20:55.157 filename=/dev/nvme0n3 00:20:55.157 [job3] 00:20:55.157 filename=/dev/nvme0n4 00:20:55.157 Could not set queue depth (nvme0n1) 00:20:55.157 Could not set queue depth (nvme0n2) 00:20:55.157 Could not set queue depth (nvme0n3) 00:20:55.157 Could not set queue depth (nvme0n4) 00:20:55.418 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:55.418 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:55.418 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:55.418 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:55.418 fio-3.35 00:20:55.418 Starting 4 threads 00:20:57.949 23:17:20 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:20:57.949 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=10620928, buflen=4096 00:20:57.949 fio: pid=2856455, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:58.207 23:17:20 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:20:58.207 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=13320192, buflen=4096 00:20:58.207 fio: pid=2856454, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:58.207 23:17:20 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:58.207 23:17:20 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:20:58.465 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=15679488, buflen=4096 00:20:58.465 fio: pid=2856450, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:58.465 23:17:20 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:58.465 23:17:20 -- 
target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:20:58.465 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=10731520, buflen=4096 00:20:58.465 fio: pid=2856453, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:58.465 23:17:21 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:58.465 23:17:21 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:20:58.724 00:20:58.724 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2856450: Fri Jun 7 23:17:21 2024 00:20:58.724 read: IOPS=1352, BW=5409KiB/s (5538kB/s)(15.0MiB/2831msec) 00:20:58.724 slat (usec): min=5, max=15246, avg=31.24, stdev=341.40 00:20:58.724 clat (usec): min=223, max=1011, avg=702.72, stdev=118.18 00:20:58.724 lat (usec): min=229, max=15918, avg=733.96, stdev=362.25 00:20:58.724 clat percentiles (usec): 00:20:58.724 | 1.00th=[ 392], 5.00th=[ 498], 10.00th=[ 553], 20.00th=[ 603], 00:20:58.724 | 30.00th=[ 644], 40.00th=[ 685], 50.00th=[ 709], 60.00th=[ 742], 00:20:58.724 | 70.00th=[ 766], 80.00th=[ 807], 90.00th=[ 848], 95.00th=[ 889], 00:20:58.724 | 99.00th=[ 947], 99.50th=[ 963], 99.90th=[ 1004], 99.95th=[ 1012], 00:20:58.724 | 99.99th=[ 1012] 00:20:58.724 bw ( KiB/s): min= 5240, max= 5616, per=33.59%, avg=5464.00, stdev=162.68, samples=5 00:20:58.724 iops : min= 1310, max= 1404, avg=1366.00, stdev=40.67, samples=5 00:20:58.724 lat (usec) : 250=0.10%, 500=4.99%, 750=58.61%, 1000=36.17% 00:20:58.724 lat (msec) : 2=0.10% 00:20:58.724 cpu : usr=1.80%, sys=5.09%, ctx=3831, majf=0, minf=1 00:20:58.724 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:58.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.724 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.724 issued rwts: total=3829,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.724 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:58.724 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2856453: Fri Jun 7 23:17:21 2024 00:20:58.724 read: IOPS=867, BW=3467KiB/s (3550kB/s)(10.2MiB/3023msec) 00:20:58.724 slat (usec): min=6, max=15020, avg=39.35, stdev=451.44 00:20:58.724 clat (usec): min=290, max=42531, avg=1107.45, stdev=1403.33 00:20:58.724 lat (usec): min=314, max=51971, avg=1146.80, stdev=1572.75 00:20:58.724 clat percentiles (usec): 00:20:58.724 | 1.00th=[ 635], 5.00th=[ 807], 10.00th=[ 889], 20.00th=[ 979], 00:20:58.724 | 30.00th=[ 1020], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1106], 00:20:58.724 | 70.00th=[ 1123], 80.00th=[ 1156], 90.00th=[ 1205], 95.00th=[ 1237], 00:20:58.724 | 99.00th=[ 1287], 99.50th=[ 1319], 99.90th=[42206], 99.95th=[42206], 00:20:58.724 | 99.99th=[42730] 00:20:58.724 bw ( KiB/s): min= 3608, max= 3824, per=22.57%, avg=3672.00, stdev=90.69, samples=5 00:20:58.724 iops : min= 902, max= 956, avg=918.00, stdev=22.67, samples=5 00:20:58.724 lat (usec) : 500=0.34%, 750=2.52%, 1000=21.37% 00:20:58.724 lat (msec) : 2=75.54%, 4=0.04%, 10=0.04%, 50=0.11% 00:20:58.724 cpu : usr=0.93%, sys=2.51%, ctx=2626, majf=0, minf=1 00:20:58.724 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:58.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.724 complete : 0=0.1%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.725 issued rwts: total=2621,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.725 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:58.725 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2856454: Fri Jun 7 23:17:21 2024 00:20:58.725 read: IOPS=1219, BW=4877KiB/s (4994kB/s)(12.7MiB/2667msec) 00:20:58.725 slat (nsec): min=6459, max=58777, avg=23204.76, stdev=6021.13 00:20:58.725 clat (usec): min=385, max=3877, avg=790.62, stdev=136.13 00:20:58.725 lat (usec): min=410, max=3901, avg=813.82, stdev=137.24 00:20:58.725 clat percentiles (usec): 00:20:58.725 | 1.00th=[ 490], 5.00th=[ 586], 10.00th=[ 627], 20.00th=[ 693], 00:20:58.725 | 30.00th=[ 734], 40.00th=[ 766], 50.00th=[ 807], 60.00th=[ 848], 00:20:58.725 | 70.00th=[ 865], 80.00th=[ 889], 90.00th=[ 914], 95.00th=[ 938], 00:20:58.725 | 99.00th=[ 1004], 99.50th=[ 1029], 99.90th=[ 2008], 99.95th=[ 3064], 00:20:58.725 | 99.99th=[ 3884] 00:20:58.725 bw ( KiB/s): min= 4848, max= 4912, per=30.03%, avg=4884.80, stdev=25.67, samples=5 00:20:58.725 iops : min= 1212, max= 1228, avg=1221.20, stdev= 6.42, samples=5 00:20:58.725 lat (usec) : 500=1.35%, 750=34.34%, 1000=63.26% 00:20:58.725 lat (msec) : 2=0.89%, 4=0.12% 00:20:58.725 cpu : usr=1.28%, sys=3.30%, ctx=3253, majf=0, minf=1 00:20:58.725 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:58.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.725 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.725 issued rwts: total=3253,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.725 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:58.725 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2856455: Fri Jun 7 23:17:21 2024 00:20:58.725 read: IOPS=1038, BW=4152KiB/s (4252kB/s)(10.1MiB/2498msec) 00:20:58.725 slat (nsec): min=6162, max=62264, avg=26610.50, stdev=2946.10 00:20:58.725 clat (usec): min=420, max=1317, avg=930.05, stdev=104.97 00:20:58.725 lat (usec): min=447, max=1348, avg=956.66, stdev=105.06 00:20:58.725 clat percentiles (usec): 00:20:58.725 | 1.00th=[ 594], 5.00th=[ 717], 10.00th=[ 791], 20.00th=[ 857], 00:20:58.725 | 30.00th=[ 906], 40.00th=[ 930], 50.00th=[ 955], 60.00th=[ 971], 00:20:58.725 | 70.00th=[ 988], 80.00th=[ 1004], 90.00th=[ 1037], 95.00th=[ 1057], 00:20:58.725 | 99.00th=[ 1123], 99.50th=[ 1156], 99.90th=[ 1188], 99.95th=[ 1205], 00:20:58.725 | 99.99th=[ 1319] 00:20:58.725 bw ( KiB/s): min= 4120, max= 4288, per=25.62%, avg=4168.00, stdev=80.80, samples=4 00:20:58.725 iops : min= 1030, max= 1072, avg=1042.00, stdev=20.20, samples=4 00:20:58.725 lat (usec) : 500=0.31%, 750=6.67%, 1000=70.43% 00:20:58.725 lat (msec) : 2=22.55% 00:20:58.725 cpu : usr=1.72%, sys=4.33%, ctx=2594, majf=0, minf=2 00:20:58.725 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:58.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.725 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.725 issued rwts: total=2594,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.725 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:58.725 00:20:58.725 Run status group 0 (all jobs): 00:20:58.725 READ: bw=15.9MiB/s (16.7MB/s), 3467KiB/s-5409KiB/s (3550kB/s-5538kB/s), io=48.0MiB (50.4MB), run=2498-3023msec 00:20:58.725 00:20:58.725 Disk stats (read/write): 00:20:58.725 
nvme0n1: ios=3775/0, merge=0/0, ticks=2299/0, in_queue=2299, util=92.29% 00:20:58.725 nvme0n2: ios=2559/0, merge=0/0, ticks=2657/0, in_queue=2657, util=94.17% 00:20:58.725 nvme0n3: ios=3100/0, merge=0/0, ticks=2356/0, in_queue=2356, util=95.64% 00:20:58.725 nvme0n4: ios=2379/0, merge=0/0, ticks=2031/0, in_queue=2031, util=95.98% 00:20:58.725 23:17:21 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:58.725 23:17:21 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:20:58.984 23:17:21 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:58.984 23:17:21 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:20:58.984 23:17:21 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:58.984 23:17:21 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:20:59.242 23:17:21 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:59.242 23:17:21 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:20:59.501 23:17:21 -- target/fio.sh@69 -- # fio_status=0 00:20:59.501 23:17:21 -- target/fio.sh@70 -- # wait 2856262 00:20:59.501 23:17:21 -- target/fio.sh@70 -- # fio_status=4 00:20:59.501 23:17:21 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:59.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:59.501 23:17:22 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:59.501 23:17:22 -- common/autotest_common.sh@1198 -- # local i=0 00:20:59.501 23:17:22 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:20:59.501 23:17:22 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:59.501 23:17:22 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:20:59.501 23:17:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:59.501 23:17:22 -- common/autotest_common.sh@1210 -- # return 0 00:20:59.501 23:17:22 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:20:59.501 23:17:22 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:20:59.501 nvmf hotplug test: fio failed as expected 00:20:59.501 23:17:22 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:59.759 23:17:22 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:20:59.759 23:17:22 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:20:59.759 23:17:22 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:20:59.759 23:17:22 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:20:59.759 23:17:22 -- target/fio.sh@91 -- # nvmftestfini 00:20:59.759 23:17:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:59.759 23:17:22 -- nvmf/common.sh@116 -- # sync 00:20:59.759 23:17:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:59.759 23:17:22 -- nvmf/common.sh@119 -- # set +e 00:20:59.759 23:17:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:59.759 23:17:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:59.759 rmmod nvme_tcp 00:20:59.759 rmmod nvme_fabrics 00:20:59.759 rmmod nvme_keyring 00:20:59.759 23:17:22 -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:59.759 23:17:22 -- nvmf/common.sh@123 -- # set -e 00:20:59.759 23:17:22 -- nvmf/common.sh@124 -- # return 0 00:20:59.759 23:17:22 -- nvmf/common.sh@477 -- # '[' -n 2852723 ']' 00:20:59.759 23:17:22 -- nvmf/common.sh@478 -- # killprocess 2852723 00:20:59.759 23:17:22 -- common/autotest_common.sh@926 -- # '[' -z 2852723 ']' 00:20:59.759 23:17:22 -- common/autotest_common.sh@930 -- # kill -0 2852723 00:20:59.759 23:17:22 -- common/autotest_common.sh@931 -- # uname 00:20:59.759 23:17:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:59.759 23:17:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2852723 00:20:59.759 23:17:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:59.759 23:17:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:59.759 23:17:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2852723' 00:20:59.759 killing process with pid 2852723 00:20:59.759 23:17:22 -- common/autotest_common.sh@945 -- # kill 2852723 00:20:59.759 23:17:22 -- common/autotest_common.sh@950 -- # wait 2852723 00:21:00.019 23:17:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:00.019 23:17:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:00.019 23:17:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:00.019 23:17:22 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:00.019 23:17:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:00.019 23:17:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.019 23:17:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:00.019 23:17:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.924 23:17:24 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:01.924 00:21:01.924 real 0m27.725s 00:21:01.924 user 2m35.775s 00:21:01.924 sys 0m9.146s 00:21:01.924 23:17:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:01.924 23:17:24 -- common/autotest_common.sh@10 -- # set +x 00:21:01.924 ************************************ 00:21:01.924 END TEST nvmf_fio_target 00:21:01.924 ************************************ 00:21:01.924 23:17:24 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:21:01.924 23:17:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:01.924 23:17:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:01.924 23:17:24 -- common/autotest_common.sh@10 -- # set +x 00:21:02.185 ************************************ 00:21:02.185 START TEST nvmf_bdevio 00:21:02.185 ************************************ 00:21:02.185 23:17:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:21:02.185 * Looking for test storage... 
00:21:02.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:02.185 23:17:24 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:02.185 23:17:24 -- nvmf/common.sh@7 -- # uname -s 00:21:02.185 23:17:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:02.185 23:17:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:02.185 23:17:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:02.185 23:17:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:02.185 23:17:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:02.185 23:17:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:02.185 23:17:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:02.185 23:17:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:02.185 23:17:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:02.185 23:17:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:02.185 23:17:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:02.185 23:17:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:02.185 23:17:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:02.185 23:17:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:02.185 23:17:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:02.185 23:17:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:02.185 23:17:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:02.185 23:17:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:02.185 23:17:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:02.185 23:17:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.185 23:17:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.185 23:17:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.185 23:17:24 -- paths/export.sh@5 -- # export PATH 00:21:02.185 23:17:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.185 23:17:24 -- nvmf/common.sh@46 -- # : 0 00:21:02.185 23:17:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:02.185 23:17:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:02.185 23:17:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:02.185 23:17:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:02.185 23:17:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:02.185 23:17:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:02.185 23:17:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:02.185 23:17:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:02.185 23:17:24 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:02.185 23:17:24 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:02.185 23:17:24 -- target/bdevio.sh@14 -- # nvmftestinit 00:21:02.185 23:17:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:02.185 23:17:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:02.185 23:17:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:02.185 23:17:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:02.185 23:17:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:02.185 23:17:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.185 23:17:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:02.185 23:17:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.185 23:17:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:02.185 23:17:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:02.185 23:17:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:02.185 23:17:24 -- common/autotest_common.sh@10 -- # set +x 00:21:10.323 23:17:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:10.323 23:17:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:10.323 23:17:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:10.323 23:17:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:10.323 23:17:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:10.323 23:17:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:10.323 23:17:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:10.323 23:17:31 -- nvmf/common.sh@294 -- # net_devs=() 00:21:10.323 23:17:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:10.323 23:17:31 -- nvmf/common.sh@295 
-- # e810=() 00:21:10.323 23:17:31 -- nvmf/common.sh@295 -- # local -ga e810 00:21:10.323 23:17:31 -- nvmf/common.sh@296 -- # x722=() 00:21:10.323 23:17:31 -- nvmf/common.sh@296 -- # local -ga x722 00:21:10.323 23:17:31 -- nvmf/common.sh@297 -- # mlx=() 00:21:10.323 23:17:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:10.323 23:17:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:10.323 23:17:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:10.323 23:17:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:10.323 23:17:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:10.323 23:17:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:10.323 23:17:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:10.323 23:17:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:10.323 23:17:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:10.323 23:17:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:10.323 23:17:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:10.323 23:17:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:10.323 23:17:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:10.323 23:17:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:10.323 23:17:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:10.323 23:17:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:10.323 23:17:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:10.323 23:17:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:10.323 23:17:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:10.323 23:17:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:10.323 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:10.323 23:17:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:10.323 23:17:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:10.323 23:17:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.323 23:17:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.323 23:17:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:10.323 23:17:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:10.323 23:17:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:10.323 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:10.323 23:17:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:10.323 23:17:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:10.323 23:17:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.323 23:17:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.323 23:17:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:10.323 23:17:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:10.323 23:17:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:10.323 23:17:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:10.323 23:17:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:10.323 23:17:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.323 23:17:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:10.323 23:17:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.323 23:17:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:10.323 Found 
net devices under 0000:31:00.0: cvl_0_0 00:21:10.323 23:17:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.323 23:17:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:10.323 23:17:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.323 23:17:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:10.323 23:17:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.323 23:17:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:10.323 Found net devices under 0000:31:00.1: cvl_0_1 00:21:10.323 23:17:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.323 23:17:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:10.323 23:17:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:10.323 23:17:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:10.323 23:17:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:10.323 23:17:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:10.323 23:17:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:10.323 23:17:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:10.323 23:17:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:10.323 23:17:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:10.323 23:17:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:10.323 23:17:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:10.323 23:17:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:10.323 23:17:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:10.323 23:17:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:10.323 23:17:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:10.323 23:17:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:10.323 23:17:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:10.323 23:17:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:10.323 23:17:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:10.323 23:17:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:10.323 23:17:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:10.323 23:17:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:10.324 23:17:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:10.324 23:17:32 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:10.324 23:17:32 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:10.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:10.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:21:10.324 00:21:10.324 --- 10.0.0.2 ping statistics --- 00:21:10.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.324 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:21:10.324 23:17:32 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:10.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:10.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.388 ms 00:21:10.324 00:21:10.324 --- 10.0.0.1 ping statistics --- 00:21:10.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.324 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:21:10.324 23:17:32 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:10.324 23:17:32 -- nvmf/common.sh@410 -- # return 0 00:21:10.324 23:17:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:10.324 23:17:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:10.324 23:17:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:10.324 23:17:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:10.324 23:17:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:10.324 23:17:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:10.324 23:17:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:10.324 23:17:32 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:10.324 23:17:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:10.324 23:17:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:10.324 23:17:32 -- common/autotest_common.sh@10 -- # set +x 00:21:10.324 23:17:32 -- nvmf/common.sh@469 -- # nvmfpid=2861563 00:21:10.324 23:17:32 -- nvmf/common.sh@470 -- # waitforlisten 2861563 00:21:10.324 23:17:32 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:21:10.324 23:17:32 -- common/autotest_common.sh@819 -- # '[' -z 2861563 ']' 00:21:10.324 23:17:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.324 23:17:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:10.324 23:17:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.324 23:17:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:10.324 23:17:32 -- common/autotest_common.sh@10 -- # set +x 00:21:10.324 [2024-06-07 23:17:32.147335] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:21:10.324 [2024-06-07 23:17:32.147397] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.324 EAL: No free 2048 kB hugepages reported on node 1 00:21:10.324 [2024-06-07 23:17:32.237198] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:10.324 [2024-06-07 23:17:32.284322] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:10.324 [2024-06-07 23:17:32.284483] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.324 [2024-06-07 23:17:32.284491] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.324 [2024-06-07 23:17:32.284499] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
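[note] The nvmf_tcp_init trace above boils down to a back-to-back loopback setup: one port of the dual-port NIC is moved into a private network namespace and acts as the target side, while its sibling stays in the root namespace as the initiator side. A minimal sketch of the equivalent manual steps, using the interface names (cvl_0_0 / cvl_0_1), the cvl_0_0_ns_spdk namespace and the 10.0.0.0/24 addressing seen in this run; this is a condensation of what the helper does on this rig, not the helper itself:

    # target port goes into its own namespace, initiator port stays in the root ns
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic to port 4420, then verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp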
00:21:10.324 [2024-06-07 23:17:32.284689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:10.324 [2024-06-07 23:17:32.284863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:10.324 [2024-06-07 23:17:32.285015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:10.324 [2024-06-07 23:17:32.285015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:10.324 23:17:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:10.324 23:17:32 -- common/autotest_common.sh@852 -- # return 0 00:21:10.324 23:17:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:10.324 23:17:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:10.324 23:17:32 -- common/autotest_common.sh@10 -- # set +x 00:21:10.324 23:17:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:10.324 23:17:32 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:10.324 23:17:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:10.324 23:17:32 -- common/autotest_common.sh@10 -- # set +x 00:21:10.324 [2024-06-07 23:17:32.989929] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:10.324 23:17:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:10.324 23:17:32 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:10.324 23:17:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:10.324 23:17:32 -- common/autotest_common.sh@10 -- # set +x 00:21:10.580 Malloc0 00:21:10.580 23:17:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:10.580 23:17:33 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:10.580 23:17:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:10.580 23:17:33 -- common/autotest_common.sh@10 -- # set +x 00:21:10.580 23:17:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:10.580 23:17:33 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:10.580 23:17:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:10.580 23:17:33 -- common/autotest_common.sh@10 -- # set +x 00:21:10.580 23:17:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:10.580 23:17:33 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:10.580 23:17:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:10.580 23:17:33 -- common/autotest_common.sh@10 -- # set +x 00:21:10.580 [2024-06-07 23:17:33.055021] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:10.580 23:17:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:10.580 23:17:33 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:21:10.580 23:17:33 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:10.580 23:17:33 -- nvmf/common.sh@520 -- # config=() 00:21:10.580 23:17:33 -- nvmf/common.sh@520 -- # local subsystem config 00:21:10.580 23:17:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:10.580 23:17:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:10.580 { 00:21:10.580 "params": { 00:21:10.580 "name": "Nvme$subsystem", 00:21:10.580 "trtype": "$TEST_TRANSPORT", 00:21:10.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.580 "adrfam": "ipv4", 00:21:10.580 "trsvcid": 
"$NVMF_PORT", 00:21:10.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.580 "hdgst": ${hdgst:-false}, 00:21:10.580 "ddgst": ${ddgst:-false} 00:21:10.580 }, 00:21:10.580 "method": "bdev_nvme_attach_controller" 00:21:10.580 } 00:21:10.580 EOF 00:21:10.580 )") 00:21:10.580 23:17:33 -- nvmf/common.sh@542 -- # cat 00:21:10.580 23:17:33 -- nvmf/common.sh@544 -- # jq . 00:21:10.580 23:17:33 -- nvmf/common.sh@545 -- # IFS=, 00:21:10.580 23:17:33 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:10.580 "params": { 00:21:10.580 "name": "Nvme1", 00:21:10.580 "trtype": "tcp", 00:21:10.580 "traddr": "10.0.0.2", 00:21:10.580 "adrfam": "ipv4", 00:21:10.580 "trsvcid": "4420", 00:21:10.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:10.580 "hdgst": false, 00:21:10.580 "ddgst": false 00:21:10.580 }, 00:21:10.580 "method": "bdev_nvme_attach_controller" 00:21:10.580 }' 00:21:10.580 [2024-06-07 23:17:33.108609] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:21:10.580 [2024-06-07 23:17:33.108682] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2861865 ] 00:21:10.580 EAL: No free 2048 kB hugepages reported on node 1 00:21:10.580 [2024-06-07 23:17:33.175655] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:10.580 [2024-06-07 23:17:33.213629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.580 [2024-06-07 23:17:33.213775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:10.580 [2024-06-07 23:17:33.213777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.835 [2024-06-07 23:17:33.466839] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:21:10.835 [2024-06-07 23:17:33.466870] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:21:10.835 I/O targets: 00:21:10.835 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:10.835 00:21:10.835 00:21:10.835 CUnit - A unit testing framework for C - Version 2.1-3 00:21:10.835 http://cunit.sourceforge.net/ 00:21:10.835 00:21:10.835 00:21:10.835 Suite: bdevio tests on: Nvme1n1 00:21:11.091 Test: blockdev write read block ...passed 00:21:11.091 Test: blockdev write zeroes read block ...passed 00:21:11.091 Test: blockdev write zeroes read no split ...passed 00:21:11.091 Test: blockdev write zeroes read split ...passed 00:21:11.091 Test: blockdev write zeroes read split partial ...passed 00:21:11.091 Test: blockdev reset ...[2024-06-07 23:17:33.597600] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:11.091 [2024-06-07 23:17:33.597651] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7dd290 (9): Bad file descriptor 00:21:11.091 [2024-06-07 23:17:33.655175] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:11.091 passed 00:21:11.091 Test: blockdev write read 8 blocks ...passed 00:21:11.091 Test: blockdev write read size > 128k ...passed 00:21:11.091 Test: blockdev write read invalid size ...passed 00:21:11.091 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:11.091 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:11.091 Test: blockdev write read max offset ...passed 00:21:11.347 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:11.347 Test: blockdev writev readv 8 blocks ...passed 00:21:11.347 Test: blockdev writev readv 30 x 1block ...passed 00:21:11.347 Test: blockdev writev readv block ...passed 00:21:11.347 Test: blockdev writev readv size > 128k ...passed 00:21:11.347 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:11.347 Test: blockdev comparev and writev ...[2024-06-07 23:17:33.879610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:11.347 [2024-06-07 23:17:33.879634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.347 [2024-06-07 23:17:33.879645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:11.347 [2024-06-07 23:17:33.879651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:11.347 [2024-06-07 23:17:33.880136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:11.347 [2024-06-07 23:17:33.880145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:11.347 [2024-06-07 23:17:33.880154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:11.347 [2024-06-07 23:17:33.880159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:11.347 [2024-06-07 23:17:33.880705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:11.347 [2024-06-07 23:17:33.880712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:11.347 [2024-06-07 23:17:33.880721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:11.347 [2024-06-07 23:17:33.880726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:11.347 [2024-06-07 23:17:33.881245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:11.347 [2024-06-07 23:17:33.881252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:11.347 [2024-06-07 23:17:33.881261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:11.347 [2024-06-07 23:17:33.881266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:11.347 passed 00:21:11.347 Test: blockdev nvme passthru rw ...passed 00:21:11.347 Test: blockdev nvme passthru vendor specific ...[2024-06-07 23:17:33.966188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:11.347 [2024-06-07 23:17:33.966198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:11.347 [2024-06-07 23:17:33.966584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:11.347 [2024-06-07 23:17:33.966591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:11.348 [2024-06-07 23:17:33.966951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:11.348 [2024-06-07 23:17:33.966958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:11.348 [2024-06-07 23:17:33.967356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:11.348 [2024-06-07 23:17:33.967363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:11.348 passed 00:21:11.348 Test: blockdev nvme admin passthru ...passed 00:21:11.348 Test: blockdev copy ...passed 00:21:11.348 00:21:11.348 Run Summary: Type Total Ran Passed Failed Inactive 00:21:11.348 suites 1 1 n/a 0 0 00:21:11.348 tests 23 23 23 0 0 00:21:11.348 asserts 152 152 152 0 n/a 00:21:11.348 00:21:11.348 Elapsed time = 1.114 seconds 00:21:11.604 23:17:34 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:11.604 23:17:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:11.604 23:17:34 -- common/autotest_common.sh@10 -- # set +x 00:21:11.604 23:17:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:11.604 23:17:34 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:11.604 23:17:34 -- target/bdevio.sh@30 -- # nvmftestfini 00:21:11.604 23:17:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:11.604 23:17:34 -- nvmf/common.sh@116 -- # sync 00:21:11.604 23:17:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:11.604 23:17:34 -- nvmf/common.sh@119 -- # set +e 00:21:11.604 23:17:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:11.604 23:17:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:11.604 rmmod nvme_tcp 00:21:11.604 rmmod nvme_fabrics 00:21:11.604 rmmod nvme_keyring 00:21:11.604 23:17:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:11.604 23:17:34 -- nvmf/common.sh@123 -- # set -e 00:21:11.604 23:17:34 -- nvmf/common.sh@124 -- # return 0 00:21:11.604 23:17:34 -- nvmf/common.sh@477 -- # '[' -n 2861563 ']' 00:21:11.604 23:17:34 -- nvmf/common.sh@478 -- # killprocess 2861563 00:21:11.604 23:17:34 -- common/autotest_common.sh@926 -- # '[' -z 2861563 ']' 00:21:11.604 23:17:34 -- common/autotest_common.sh@930 -- # kill -0 2861563 00:21:11.604 23:17:34 -- common/autotest_common.sh@931 -- # uname 00:21:11.604 23:17:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:11.605 23:17:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2861563 00:21:11.605 23:17:34 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:21:11.605 23:17:34 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:21:11.605 23:17:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2861563' 00:21:11.605 killing process with pid 2861563 00:21:11.605 23:17:34 -- common/autotest_common.sh@945 -- # kill 2861563 00:21:11.605 23:17:34 -- common/autotest_common.sh@950 -- # wait 2861563 00:21:11.864 23:17:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:11.864 23:17:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:11.864 23:17:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:11.864 23:17:34 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:11.864 23:17:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:11.864 23:17:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.865 23:17:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:11.865 23:17:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.777 23:17:36 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:13.777 00:21:13.777 real 0m11.843s 00:21:13.777 user 0m12.677s 00:21:13.778 sys 0m5.929s 00:21:13.778 23:17:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:13.778 23:17:36 -- common/autotest_common.sh@10 -- # set +x 00:21:13.778 ************************************ 00:21:13.778 END TEST nvmf_bdevio 00:21:13.778 ************************************ 00:21:14.040 23:17:36 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:21:14.040 23:17:36 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:14.040 23:17:36 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:21:14.040 23:17:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:14.040 23:17:36 -- common/autotest_common.sh@10 -- # set +x 00:21:14.040 ************************************ 00:21:14.040 START TEST nvmf_bdevio_no_huge 00:21:14.040 ************************************ 00:21:14.040 23:17:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:14.040 * Looking for test storage... 
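[note] Before the no-huge variant starts, the nvmf_bdevio run above tears itself down through nvmftestfini. Condensed from that trace, the cleanup amounts to roughly the following; the namespace removal line is an assumption about what _remove_spdk_ns does, since its output is redirected away in the log:

    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    modprobe -v -r nvme-tcp              # also drops nvme_fabrics / nvme_keyring, per the rmmod lines
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"   # killprocess of the nvmf_tgt pid (2861563 in this run)
    ip netns del cvl_0_0_ns_spdk         # assumption: the effect of _remove_spdk_ns here
    ip -4 addr flush cvl_0_1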
00:21:14.040 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:14.040 23:17:36 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:14.040 23:17:36 -- nvmf/common.sh@7 -- # uname -s 00:21:14.040 23:17:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:14.040 23:17:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:14.040 23:17:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:14.040 23:17:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:14.040 23:17:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:14.040 23:17:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:14.040 23:17:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:14.040 23:17:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:14.040 23:17:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:14.040 23:17:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:14.040 23:17:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:14.040 23:17:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:14.040 23:17:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:14.040 23:17:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:14.040 23:17:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:14.040 23:17:36 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:14.040 23:17:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:14.040 23:17:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:14.040 23:17:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:14.040 23:17:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.040 23:17:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.040 23:17:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.040 23:17:36 -- paths/export.sh@5 -- # export PATH 00:21:14.040 23:17:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.040 23:17:36 -- nvmf/common.sh@46 -- # : 0 00:21:14.040 23:17:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:14.040 23:17:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:14.040 23:17:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:14.040 23:17:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:14.040 23:17:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:14.040 23:17:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:14.040 23:17:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:14.040 23:17:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:14.040 23:17:36 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:14.040 23:17:36 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:14.040 23:17:36 -- target/bdevio.sh@14 -- # nvmftestinit 00:21:14.040 23:17:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:14.040 23:17:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:14.040 23:17:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:14.040 23:17:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:14.040 23:17:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:14.040 23:17:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.040 23:17:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:14.040 23:17:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.040 23:17:36 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:14.040 23:17:36 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:14.040 23:17:36 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:14.040 23:17:36 -- common/autotest_common.sh@10 -- # set +x 00:21:22.183 23:17:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:22.183 23:17:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:22.183 23:17:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:22.183 23:17:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:22.183 23:17:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:22.183 23:17:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:22.183 23:17:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:22.183 23:17:43 -- nvmf/common.sh@294 -- # net_devs=() 00:21:22.183 23:17:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:22.183 23:17:43 -- nvmf/common.sh@295 
-- # e810=() 00:21:22.183 23:17:43 -- nvmf/common.sh@295 -- # local -ga e810 00:21:22.183 23:17:43 -- nvmf/common.sh@296 -- # x722=() 00:21:22.183 23:17:43 -- nvmf/common.sh@296 -- # local -ga x722 00:21:22.183 23:17:43 -- nvmf/common.sh@297 -- # mlx=() 00:21:22.183 23:17:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:22.183 23:17:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:22.183 23:17:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:22.183 23:17:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:22.183 23:17:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:22.183 23:17:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:22.183 23:17:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:22.183 23:17:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:22.183 23:17:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:22.183 23:17:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:22.183 23:17:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:22.183 23:17:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:22.183 23:17:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:22.183 23:17:43 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:22.183 23:17:43 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:22.183 23:17:43 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:22.183 23:17:43 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:22.183 23:17:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:22.183 23:17:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:22.183 23:17:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:22.183 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:22.183 23:17:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:22.183 23:17:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:22.183 23:17:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.183 23:17:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.183 23:17:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:22.183 23:17:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:22.183 23:17:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:22.183 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:22.183 23:17:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:22.183 23:17:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:22.183 23:17:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.183 23:17:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.183 23:17:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:22.183 23:17:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:22.183 23:17:43 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:22.183 23:17:43 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:22.183 23:17:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:22.183 23:17:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.183 23:17:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:22.183 23:17:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.183 23:17:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:22.183 Found 
net devices under 0000:31:00.0: cvl_0_0 00:21:22.183 23:17:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.183 23:17:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:22.183 23:17:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.183 23:17:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:22.183 23:17:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.183 23:17:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:22.183 Found net devices under 0000:31:00.1: cvl_0_1 00:21:22.183 23:17:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.183 23:17:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:22.183 23:17:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:22.183 23:17:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:22.183 23:17:43 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:22.183 23:17:43 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:22.183 23:17:43 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:22.183 23:17:43 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:22.183 23:17:43 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:22.183 23:17:43 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:22.183 23:17:43 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:22.183 23:17:43 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:22.183 23:17:43 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:22.183 23:17:43 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:22.183 23:17:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:22.183 23:17:43 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:22.183 23:17:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:22.183 23:17:43 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:22.183 23:17:43 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:22.183 23:17:43 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:22.183 23:17:43 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:22.183 23:17:43 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:22.183 23:17:43 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:22.183 23:17:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:22.183 23:17:43 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:22.183 23:17:43 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:22.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:22.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:21:22.183 00:21:22.183 --- 10.0.0.2 ping statistics --- 00:21:22.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.183 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:21:22.183 23:17:43 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:22.183 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:22.183 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.362 ms 00:21:22.183 00:21:22.183 --- 10.0.0.1 ping statistics --- 00:21:22.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.183 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:21:22.183 23:17:43 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:22.183 23:17:43 -- nvmf/common.sh@410 -- # return 0 00:21:22.183 23:17:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:22.183 23:17:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:22.183 23:17:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:22.184 23:17:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:22.184 23:17:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:22.184 23:17:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:22.184 23:17:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:22.184 23:17:44 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:22.184 23:17:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:22.184 23:17:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:22.184 23:17:44 -- common/autotest_common.sh@10 -- # set +x 00:21:22.184 23:17:44 -- nvmf/common.sh@469 -- # nvmfpid=2866329 00:21:22.184 23:17:44 -- nvmf/common.sh@470 -- # waitforlisten 2866329 00:21:22.184 23:17:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:22.184 23:17:44 -- common/autotest_common.sh@819 -- # '[' -z 2866329 ']' 00:21:22.184 23:17:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.184 23:17:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:22.184 23:17:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:22.184 23:17:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:22.184 23:17:44 -- common/autotest_common.sh@10 -- # set +x 00:21:22.184 [2024-06-07 23:17:44.079905] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:21:22.184 [2024-06-07 23:17:44.079973] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:22.184 [2024-06-07 23:17:44.173280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:22.184 [2024-06-07 23:17:44.249918] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:22.184 [2024-06-07 23:17:44.250061] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:22.184 [2024-06-07 23:17:44.250070] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:22.184 [2024-06-07 23:17:44.250078] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
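[note] The functional difference from the previous run is visible in the invocation just traced: the target, and further down bdevio itself, gets --no-huge -s 1024, so the app works out of 1024 MB of ordinary (non-hugepage) memory and the EAL parameter line correspondingly carries --iova-mode=va. Side by side, assuming the same paths as in the trace and running from the spdk checkout:

    # hugepage run (earlier test)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
    # no-huge run (this test): same target plus --no-huge -s 1024
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
    # the bdevio initiator is started with the same memory flags
    ./test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024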
00:21:22.184 [2024-06-07 23:17:44.250236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:22.184 [2024-06-07 23:17:44.250398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:22.184 [2024-06-07 23:17:44.250701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:22.184 [2024-06-07 23:17:44.250704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:22.442 23:17:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:22.442 23:17:44 -- common/autotest_common.sh@852 -- # return 0 00:21:22.442 23:17:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:22.442 23:17:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:22.442 23:17:44 -- common/autotest_common.sh@10 -- # set +x 00:21:22.442 23:17:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:22.442 23:17:44 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:22.442 23:17:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:22.442 23:17:44 -- common/autotest_common.sh@10 -- # set +x 00:21:22.442 [2024-06-07 23:17:44.916937] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.442 23:17:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:22.442 23:17:44 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:22.442 23:17:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:22.442 23:17:44 -- common/autotest_common.sh@10 -- # set +x 00:21:22.442 Malloc0 00:21:22.442 23:17:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:22.442 23:17:44 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:22.442 23:17:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:22.442 23:17:44 -- common/autotest_common.sh@10 -- # set +x 00:21:22.442 23:17:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:22.442 23:17:44 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:22.442 23:17:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:22.442 23:17:44 -- common/autotest_common.sh@10 -- # set +x 00:21:22.442 23:17:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:22.442 23:17:44 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:22.442 23:17:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:22.442 23:17:44 -- common/autotest_common.sh@10 -- # set +x 00:21:22.442 [2024-06-07 23:17:44.958601] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.442 23:17:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:22.442 23:17:44 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:22.442 23:17:44 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:22.442 23:17:44 -- nvmf/common.sh@520 -- # config=() 00:21:22.442 23:17:44 -- nvmf/common.sh@520 -- # local subsystem config 00:21:22.442 23:17:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:22.442 23:17:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:22.442 { 00:21:22.442 "params": { 00:21:22.442 "name": "Nvme$subsystem", 00:21:22.442 "trtype": "$TEST_TRANSPORT", 00:21:22.442 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.442 "adrfam": "ipv4", 00:21:22.442 
"trsvcid": "$NVMF_PORT", 00:21:22.442 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.442 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.442 "hdgst": ${hdgst:-false}, 00:21:22.442 "ddgst": ${ddgst:-false} 00:21:22.442 }, 00:21:22.442 "method": "bdev_nvme_attach_controller" 00:21:22.442 } 00:21:22.442 EOF 00:21:22.442 )") 00:21:22.442 23:17:44 -- nvmf/common.sh@542 -- # cat 00:21:22.442 23:17:44 -- nvmf/common.sh@544 -- # jq . 00:21:22.442 23:17:44 -- nvmf/common.sh@545 -- # IFS=, 00:21:22.442 23:17:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:22.442 "params": { 00:21:22.442 "name": "Nvme1", 00:21:22.442 "trtype": "tcp", 00:21:22.442 "traddr": "10.0.0.2", 00:21:22.442 "adrfam": "ipv4", 00:21:22.442 "trsvcid": "4420", 00:21:22.442 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.442 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:22.442 "hdgst": false, 00:21:22.442 "ddgst": false 00:21:22.442 }, 00:21:22.442 "method": "bdev_nvme_attach_controller" 00:21:22.442 }' 00:21:22.442 [2024-06-07 23:17:45.010561] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:21:22.442 [2024-06-07 23:17:45.010629] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2866372 ] 00:21:22.442 [2024-06-07 23:17:45.077375] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:22.699 [2024-06-07 23:17:45.146052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:22.699 [2024-06-07 23:17:45.146201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:22.699 [2024-06-07 23:17:45.146203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.699 [2024-06-07 23:17:45.359708] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:21:22.699 [2024-06-07 23:17:45.359731] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:21:22.699 I/O targets: 00:21:22.699 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:22.699 00:21:22.699 00:21:22.699 CUnit - A unit testing framework for C - Version 2.1-3 00:21:22.699 http://cunit.sourceforge.net/ 00:21:22.699 00:21:22.699 00:21:22.699 Suite: bdevio tests on: Nvme1n1 00:21:22.955 Test: blockdev write read block ...passed 00:21:22.955 Test: blockdev write zeroes read block ...passed 00:21:22.955 Test: blockdev write zeroes read no split ...passed 00:21:22.955 Test: blockdev write zeroes read split ...passed 00:21:22.955 Test: blockdev write zeroes read split partial ...passed 00:21:22.955 Test: blockdev reset ...[2024-06-07 23:17:45.545445] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.955 [2024-06-07 23:17:45.545506] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7671a0 (9): Bad file descriptor 00:21:22.955 [2024-06-07 23:17:45.599699] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:22.955 passed 00:21:23.212 Test: blockdev write read 8 blocks ...passed 00:21:23.212 Test: blockdev write read size > 128k ...passed 00:21:23.212 Test: blockdev write read invalid size ...passed 00:21:23.212 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:23.212 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:23.212 Test: blockdev write read max offset ...passed 00:21:23.212 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:23.212 Test: blockdev writev readv 8 blocks ...passed 00:21:23.212 Test: blockdev writev readv 30 x 1block ...passed 00:21:23.212 Test: blockdev writev readv block ...passed 00:21:23.212 Test: blockdev writev readv size > 128k ...passed 00:21:23.212 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:23.212 Test: blockdev comparev and writev ...[2024-06-07 23:17:45.822670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:23.212 [2024-06-07 23:17:45.822697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.212 [2024-06-07 23:17:45.822708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:23.212 [2024-06-07 23:17:45.822713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:23.212 [2024-06-07 23:17:45.823100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:23.212 [2024-06-07 23:17:45.823108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:23.212 [2024-06-07 23:17:45.823117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:23.212 [2024-06-07 23:17:45.823122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:23.212 [2024-06-07 23:17:45.823545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:23.212 [2024-06-07 23:17:45.823552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:23.212 [2024-06-07 23:17:45.823561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:23.212 [2024-06-07 23:17:45.823566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:23.212 [2024-06-07 23:17:45.823917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:23.212 [2024-06-07 23:17:45.823924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:23.212 [2024-06-07 23:17:45.823933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:23.212 [2024-06-07 23:17:45.823938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:23.212 passed 00:21:23.468 Test: blockdev nvme passthru rw ...passed 00:21:23.468 Test: blockdev nvme passthru vendor specific ...[2024-06-07 23:17:45.908890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:23.468 [2024-06-07 23:17:45.908901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:23.468 [2024-06-07 23:17:45.909187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:23.468 [2024-06-07 23:17:45.909193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:23.468 [2024-06-07 23:17:45.909454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:23.468 [2024-06-07 23:17:45.909461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:23.468 [2024-06-07 23:17:45.909737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:23.468 [2024-06-07 23:17:45.909744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:23.468 passed 00:21:23.468 Test: blockdev nvme admin passthru ...passed 00:21:23.468 Test: blockdev copy ...passed 00:21:23.468 00:21:23.468 Run Summary: Type Total Ran Passed Failed Inactive 00:21:23.468 suites 1 1 n/a 0 0 00:21:23.468 tests 23 23 23 0 0 00:21:23.468 asserts 152 152 152 0 n/a 00:21:23.468 00:21:23.468 Elapsed time = 1.226 seconds 00:21:23.724 23:17:46 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:23.724 23:17:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:23.724 23:17:46 -- common/autotest_common.sh@10 -- # set +x 00:21:23.724 23:17:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:23.724 23:17:46 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:23.724 23:17:46 -- target/bdevio.sh@30 -- # nvmftestfini 00:21:23.724 23:17:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:23.725 23:17:46 -- nvmf/common.sh@116 -- # sync 00:21:23.725 23:17:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:23.725 23:17:46 -- nvmf/common.sh@119 -- # set +e 00:21:23.725 23:17:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:23.725 23:17:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:23.725 rmmod nvme_tcp 00:21:23.725 rmmod nvme_fabrics 00:21:23.725 rmmod nvme_keyring 00:21:23.725 23:17:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:23.725 23:17:46 -- nvmf/common.sh@123 -- # set -e 00:21:23.725 23:17:46 -- nvmf/common.sh@124 -- # return 0 00:21:23.725 23:17:46 -- nvmf/common.sh@477 -- # '[' -n 2866329 ']' 00:21:23.725 23:17:46 -- nvmf/common.sh@478 -- # killprocess 2866329 00:21:23.725 23:17:46 -- common/autotest_common.sh@926 -- # '[' -z 2866329 ']' 00:21:23.725 23:17:46 -- common/autotest_common.sh@930 -- # kill -0 2866329 00:21:23.725 23:17:46 -- common/autotest_common.sh@931 -- # uname 00:21:23.725 23:17:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:23.725 23:17:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2866329 00:21:23.725 23:17:46 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:21:23.725 23:17:46 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:21:23.725 23:17:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2866329' 00:21:23.725 killing process with pid 2866329 00:21:23.725 23:17:46 -- common/autotest_common.sh@945 -- # kill 2866329 00:21:23.725 23:17:46 -- common/autotest_common.sh@950 -- # wait 2866329 00:21:23.984 23:17:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:23.984 23:17:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:23.984 23:17:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:23.984 23:17:46 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:23.984 23:17:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:23.984 23:17:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.984 23:17:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:23.984 23:17:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.529 23:17:48 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:26.529 00:21:26.529 real 0m12.126s 00:21:26.529 user 0m13.380s 00:21:26.529 sys 0m6.348s 00:21:26.529 23:17:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:26.529 23:17:48 -- common/autotest_common.sh@10 -- # set +x 00:21:26.529 ************************************ 00:21:26.529 END TEST nvmf_bdevio_no_huge 00:21:26.529 ************************************ 00:21:26.529 23:17:48 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:26.529 23:17:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:26.529 23:17:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:26.529 23:17:48 -- common/autotest_common.sh@10 -- # set +x 00:21:26.529 ************************************ 00:21:26.529 START TEST nvmf_tls 00:21:26.529 ************************************ 00:21:26.529 23:17:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:26.529 * Looking for test storage... 
00:21:26.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:26.529 23:17:48 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:26.529 23:17:48 -- nvmf/common.sh@7 -- # uname -s 00:21:26.529 23:17:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:26.529 23:17:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:26.529 23:17:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:26.529 23:17:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:26.529 23:17:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:26.529 23:17:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:26.529 23:17:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:26.529 23:17:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:26.529 23:17:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:26.529 23:17:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:26.529 23:17:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:26.529 23:17:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:26.529 23:17:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:26.529 23:17:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:26.529 23:17:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:26.529 23:17:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:26.529 23:17:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:26.529 23:17:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:26.529 23:17:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:26.529 23:17:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.529 23:17:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.529 23:17:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.529 23:17:48 -- paths/export.sh@5 -- # export PATH 00:21:26.529 23:17:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.529 23:17:48 -- nvmf/common.sh@46 -- # : 0 00:21:26.529 23:17:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:26.529 23:17:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:26.529 23:17:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:26.529 23:17:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:26.529 23:17:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:26.529 23:17:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:26.529 23:17:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:26.529 23:17:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:26.529 23:17:48 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:26.529 23:17:48 -- target/tls.sh@71 -- # nvmftestinit 00:21:26.529 23:17:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:26.529 23:17:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:26.529 23:17:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:26.529 23:17:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:26.529 23:17:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:26.529 23:17:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.529 23:17:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:26.529 23:17:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.529 23:17:48 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:26.529 23:17:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:26.529 23:17:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:26.529 23:17:48 -- common/autotest_common.sh@10 -- # set +x 00:21:33.117 23:17:55 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:33.117 23:17:55 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:33.117 23:17:55 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:33.117 23:17:55 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:33.117 23:17:55 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:33.117 23:17:55 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:33.117 23:17:55 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:33.117 23:17:55 -- nvmf/common.sh@294 -- # net_devs=() 00:21:33.117 23:17:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:33.117 23:17:55 -- nvmf/common.sh@295 -- # e810=() 00:21:33.117 
23:17:55 -- nvmf/common.sh@295 -- # local -ga e810 00:21:33.117 23:17:55 -- nvmf/common.sh@296 -- # x722=() 00:21:33.117 23:17:55 -- nvmf/common.sh@296 -- # local -ga x722 00:21:33.117 23:17:55 -- nvmf/common.sh@297 -- # mlx=() 00:21:33.117 23:17:55 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:33.117 23:17:55 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:33.117 23:17:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:33.117 23:17:55 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:33.117 23:17:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:33.117 23:17:55 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:33.117 23:17:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:33.117 23:17:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:33.117 23:17:55 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:33.117 23:17:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:33.117 23:17:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:33.117 23:17:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:33.117 23:17:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:33.118 23:17:55 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:33.118 23:17:55 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:33.118 23:17:55 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:33.118 23:17:55 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:33.118 23:17:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:33.118 23:17:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:33.118 23:17:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:33.118 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:33.118 23:17:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:33.118 23:17:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:33.118 23:17:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.118 23:17:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.118 23:17:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:33.118 23:17:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:33.118 23:17:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:33.118 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:33.118 23:17:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:33.118 23:17:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:33.118 23:17:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.118 23:17:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.118 23:17:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:33.118 23:17:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:33.118 23:17:55 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:33.118 23:17:55 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:33.118 23:17:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:33.118 23:17:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.118 23:17:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:33.118 23:17:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.118 23:17:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:33.118 Found net devices under 
0000:31:00.0: cvl_0_0 00:21:33.118 23:17:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.118 23:17:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:33.118 23:17:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.118 23:17:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:33.118 23:17:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.118 23:17:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:33.118 Found net devices under 0000:31:00.1: cvl_0_1 00:21:33.118 23:17:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.118 23:17:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:33.118 23:17:55 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:33.118 23:17:55 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:33.118 23:17:55 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:33.118 23:17:55 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:33.118 23:17:55 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:33.118 23:17:55 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:33.118 23:17:55 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:33.118 23:17:55 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:33.118 23:17:55 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:33.118 23:17:55 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:33.118 23:17:55 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:33.118 23:17:55 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:33.118 23:17:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:33.118 23:17:55 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:33.118 23:17:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:33.118 23:17:55 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:33.118 23:17:55 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:33.118 23:17:55 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:33.118 23:17:55 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:33.378 23:17:55 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:33.378 23:17:55 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:33.378 23:17:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:33.378 23:17:55 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:33.378 23:17:55 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:33.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:33.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.530 ms 00:21:33.378 00:21:33.378 --- 10.0.0.2 ping statistics --- 00:21:33.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.378 rtt min/avg/max/mdev = 0.530/0.530/0.530/0.000 ms 00:21:33.378 23:17:55 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:33.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:33.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:21:33.379 00:21:33.379 --- 10.0.0.1 ping statistics --- 00:21:33.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.379 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:21:33.379 23:17:55 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:33.379 23:17:55 -- nvmf/common.sh@410 -- # return 0 00:21:33.379 23:17:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:33.379 23:17:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:33.379 23:17:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:33.379 23:17:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:33.379 23:17:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:33.379 23:17:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:33.379 23:17:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:33.379 23:17:55 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:33.379 23:17:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:33.379 23:17:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:33.379 23:17:55 -- common/autotest_common.sh@10 -- # set +x 00:21:33.379 23:17:55 -- nvmf/common.sh@469 -- # nvmfpid=2870942 00:21:33.379 23:17:55 -- nvmf/common.sh@470 -- # waitforlisten 2870942 00:21:33.379 23:17:55 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:33.379 23:17:55 -- common/autotest_common.sh@819 -- # '[' -z 2870942 ']' 00:21:33.379 23:17:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.379 23:17:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:33.379 23:17:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.379 23:17:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:33.379 23:17:55 -- common/autotest_common.sh@10 -- # set +x 00:21:33.379 [2024-06-07 23:17:56.043442] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:21:33.379 [2024-06-07 23:17:56.043504] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.640 EAL: No free 2048 kB hugepages reported on node 1 00:21:33.640 [2024-06-07 23:17:56.133711] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.640 [2024-06-07 23:17:56.177710] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:33.640 [2024-06-07 23:17:56.177858] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:33.640 [2024-06-07 23:17:56.177867] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:33.640 [2024-06-07 23:17:56.177875] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
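A minimal sketch of the nvmf_tcp_init network prep traced above, assuming the two E810 ports detected in this run (cvl_0_0, cvl_0_1) and the namespace name used throughout this log; one port is moved into a network namespace as the target side (10.0.0.2) while the other stays in the root namespace as the initiator side (10.0.0.1), with TCP port 4420 opened for NVMe/TCP:

# target port goes into its own netns; initiator port stays in the root netns
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                   # reachability check before starting the target
# nvmf_tgt is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... --wait-for-rpc), as traced above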
00:21:33.640 [2024-06-07 23:17:56.177902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:34.211 23:17:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:34.211 23:17:56 -- common/autotest_common.sh@852 -- # return 0 00:21:34.212 23:17:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:34.212 23:17:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:34.212 23:17:56 -- common/autotest_common.sh@10 -- # set +x 00:21:34.212 23:17:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:34.212 23:17:56 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:21:34.212 23:17:56 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:34.476 true 00:21:34.476 23:17:56 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:34.476 23:17:56 -- target/tls.sh@82 -- # jq -r .tls_version 00:21:34.476 23:17:57 -- target/tls.sh@82 -- # version=0 00:21:34.476 23:17:57 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:21:34.476 23:17:57 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:34.803 23:17:57 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:34.804 23:17:57 -- target/tls.sh@90 -- # jq -r .tls_version 00:21:35.079 23:17:57 -- target/tls.sh@90 -- # version=13 00:21:35.079 23:17:57 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:21:35.079 23:17:57 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:35.079 23:17:57 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:35.079 23:17:57 -- target/tls.sh@98 -- # jq -r .tls_version 00:21:35.339 23:17:57 -- target/tls.sh@98 -- # version=7 00:21:35.339 23:17:57 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:21:35.339 23:17:57 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:35.339 23:17:57 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:21:35.339 23:17:57 -- target/tls.sh@105 -- # ktls=false 00:21:35.339 23:17:57 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:21:35.339 23:17:57 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:35.600 23:17:58 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:35.600 23:17:58 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:21:35.921 23:17:58 -- target/tls.sh@113 -- # ktls=true 00:21:35.921 23:17:58 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:21:35.921 23:17:58 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:35.921 23:17:58 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:35.921 23:17:58 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:21:36.181 23:17:58 -- target/tls.sh@121 -- # ktls=false 00:21:36.181 23:17:58 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:21:36.181 23:17:58 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 
00:21:36.181 23:17:58 -- target/tls.sh@49 -- # local key hash crc 00:21:36.181 23:17:58 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:21:36.181 23:17:58 -- target/tls.sh@51 -- # hash=01 00:21:36.181 23:17:58 -- target/tls.sh@52 -- # head -c 4 00:21:36.181 23:17:58 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:21:36.181 23:17:58 -- target/tls.sh@52 -- # gzip -1 -c 00:21:36.181 23:17:58 -- target/tls.sh@52 -- # tail -c8 00:21:36.181 23:17:58 -- target/tls.sh@52 -- # crc='p$H�' 00:21:36.181 23:17:58 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:21:36.181 23:17:58 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:21:36.181 23:17:58 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:36.181 23:17:58 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:36.181 23:17:58 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:21:36.181 23:17:58 -- target/tls.sh@49 -- # local key hash crc 00:21:36.181 23:17:58 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:21:36.181 23:17:58 -- target/tls.sh@51 -- # hash=01 00:21:36.181 23:17:58 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:21:36.181 23:17:58 -- target/tls.sh@52 -- # gzip -1 -c 00:21:36.181 23:17:58 -- target/tls.sh@52 -- # tail -c8 00:21:36.181 23:17:58 -- target/tls.sh@52 -- # head -c 4 00:21:36.181 23:17:58 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:21:36.181 23:17:58 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:21:36.181 23:17:58 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:21:36.181 23:17:58 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:36.181 23:17:58 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:36.181 23:17:58 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:36.181 23:17:58 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:36.181 23:17:58 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:36.181 23:17:58 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:36.181 23:17:58 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:36.181 23:17:58 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:36.181 23:17:58 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:36.181 23:17:58 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:36.443 23:17:59 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:36.443 23:17:59 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:36.443 23:17:59 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:36.705 [2024-06-07 23:17:59.205701] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
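A condensed sketch of the format_interchange_psk derivation traced above; it mirrors the shell steps from target/tls.sh (the CRC32 of the configured key is read out of the first four bytes of the 8-byte gzip trailer, appended to the key, and base64-encoded into the NVMeTLSkey-1:<hash>:<b64>: interchange form), and the 01/02 hash labels correspond to the SHA-256 and SHA-384 variants. The key1.txt filename is the one used later in this run; the shell approach is only byte-exact when the CRC contains no NUL or newline bytes, which happens to hold for both sample keys here:

# derive an NVMe TLS interchange PSK the same way the trace above does
key=00112233445566778899aabbccddeeff                         # configured hex-string key
hash=01                                                      # 01: SHA-256 variant (02 would be SHA-384)
crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)     # CRC32 taken from the gzip trailer
psk="NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"
echo "$psk"    # NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
echo -n "$psk" > key1.txt && chmod 0600 key1.txt             # stored with restrictive permissions, as above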
00:21:36.705 23:17:59 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:36.705 23:17:59 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:36.967 [2024-06-07 23:17:59.486391] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:36.967 [2024-06-07 23:17:59.486566] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:36.967 23:17:59 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:36.967 malloc0 00:21:37.227 23:17:59 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:37.227 23:17:59 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:37.488 23:17:59 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:37.488 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.484 Initializing NVMe Controllers 00:21:47.484 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:47.484 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:47.484 Initialization complete. Launching workers. 
00:21:47.484 ======================================================== 00:21:47.484 Latency(us) 00:21:47.484 Device Information : IOPS MiB/s Average min max 00:21:47.484 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19700.79 76.96 3248.45 1330.29 3894.23 00:21:47.484 ======================================================== 00:21:47.484 Total : 19700.79 76.96 3248.45 1330.29 3894.23 00:21:47.484 00:21:47.484 23:18:10 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:47.484 23:18:10 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:47.484 23:18:10 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:47.484 23:18:10 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:47.484 23:18:10 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:21:47.484 23:18:10 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:47.484 23:18:10 -- target/tls.sh@28 -- # bdevperf_pid=2874219 00:21:47.484 23:18:10 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:47.484 23:18:10 -- target/tls.sh@31 -- # waitforlisten 2874219 /var/tmp/bdevperf.sock 00:21:47.484 23:18:10 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:47.484 23:18:10 -- common/autotest_common.sh@819 -- # '[' -z 2874219 ']' 00:21:47.484 23:18:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:47.484 23:18:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:47.484 23:18:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:47.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:47.484 23:18:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:47.484 23:18:10 -- common/autotest_common.sh@10 -- # set +x 00:21:47.484 [2024-06-07 23:18:10.082749] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:21:47.484 [2024-06-07 23:18:10.082808] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2874219 ] 00:21:47.484 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.484 [2024-06-07 23:18:10.134734] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.484 [2024-06-07 23:18:10.161514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:48.424 23:18:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:48.424 23:18:10 -- common/autotest_common.sh@852 -- # return 0 00:21:48.424 23:18:10 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:48.424 [2024-06-07 23:18:10.969583] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:48.424 TLSTESTn1 00:21:48.424 23:18:11 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:48.683 Running I/O for 10 seconds... 00:21:58.669 00:21:58.669 Latency(us) 00:21:58.669 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:58.669 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:58.669 Verification LBA range: start 0x0 length 0x2000 00:21:58.669 TLSTESTn1 : 10.02 4327.98 16.91 0.00 0.00 29550.16 3358.72 41506.13 00:21:58.669 =================================================================================================================== 00:21:58.669 Total : 4327.98 16.91 0.00 0.00 29550.16 3358.72 41506.13 00:21:58.669 0 00:21:58.669 23:18:21 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:58.669 23:18:21 -- target/tls.sh@45 -- # killprocess 2874219 00:21:58.669 23:18:21 -- common/autotest_common.sh@926 -- # '[' -z 2874219 ']' 00:21:58.669 23:18:21 -- common/autotest_common.sh@930 -- # kill -0 2874219 00:21:58.669 23:18:21 -- common/autotest_common.sh@931 -- # uname 00:21:58.669 23:18:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:58.669 23:18:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2874219 00:21:58.669 23:18:21 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:58.669 23:18:21 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:58.669 23:18:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2874219' 00:21:58.669 killing process with pid 2874219 00:21:58.669 23:18:21 -- common/autotest_common.sh@945 -- # kill 2874219 00:21:58.669 Received shutdown signal, test time was about 10.000000 seconds 00:21:58.669 00:21:58.669 Latency(us) 00:21:58.669 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:58.669 =================================================================================================================== 00:21:58.669 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:58.669 23:18:21 -- common/autotest_common.sh@950 -- # wait 2874219 00:21:58.929 23:18:21 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:58.929 23:18:21 -- common/autotest_common.sh@640 -- # local es=0 00:21:58.929 23:18:21 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:58.929 23:18:21 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:21:58.929 23:18:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:58.929 23:18:21 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:21:58.929 23:18:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:58.929 23:18:21 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:58.929 23:18:21 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:58.929 23:18:21 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:58.929 23:18:21 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:58.929 23:18:21 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:21:58.929 23:18:21 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:58.929 23:18:21 -- target/tls.sh@28 -- # bdevperf_pid=2876474 00:21:58.929 23:18:21 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:58.929 23:18:21 -- target/tls.sh@31 -- # waitforlisten 2876474 /var/tmp/bdevperf.sock 00:21:58.929 23:18:21 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:58.929 23:18:21 -- common/autotest_common.sh@819 -- # '[' -z 2876474 ']' 00:21:58.929 23:18:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:58.929 23:18:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:58.929 23:18:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:58.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:58.929 23:18:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:58.929 23:18:21 -- common/autotest_common.sh@10 -- # set +x 00:21:58.929 [2024-06-07 23:18:21.428436] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:21:58.929 [2024-06-07 23:18:21.428488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2876474 ] 00:21:58.929 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.929 [2024-06-07 23:18:21.478146] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.929 [2024-06-07 23:18:21.504497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:59.496 23:18:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:59.496 23:18:22 -- common/autotest_common.sh@852 -- # return 0 00:21:59.496 23:18:22 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:59.755 [2024-06-07 23:18:22.304318] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:59.755 [2024-06-07 23:18:22.315950] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:59.755 [2024-06-07 23:18:22.316314] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x212ed00 (107): Transport endpoint is not connected 00:21:59.755 [2024-06-07 23:18:22.317309] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x212ed00 (9): Bad file descriptor 00:21:59.755 [2024-06-07 23:18:22.318311] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:59.755 [2024-06-07 23:18:22.318319] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:59.755 [2024-06-07 23:18:22.318325] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:59.755 request: 00:21:59.755 { 00:21:59.755 "name": "TLSTEST", 00:21:59.755 "trtype": "tcp", 00:21:59.755 "traddr": "10.0.0.2", 00:21:59.755 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:59.755 "adrfam": "ipv4", 00:21:59.755 "trsvcid": "4420", 00:21:59.755 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.755 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:21:59.755 "method": "bdev_nvme_attach_controller", 00:21:59.755 "req_id": 1 00:21:59.755 } 00:21:59.755 Got JSON-RPC error response 00:21:59.755 response: 00:21:59.755 { 00:21:59.755 "code": -32602, 00:21:59.755 "message": "Invalid parameters" 00:21:59.755 } 00:21:59.755 23:18:22 -- target/tls.sh@36 -- # killprocess 2876474 00:21:59.755 23:18:22 -- common/autotest_common.sh@926 -- # '[' -z 2876474 ']' 00:21:59.755 23:18:22 -- common/autotest_common.sh@930 -- # kill -0 2876474 00:21:59.755 23:18:22 -- common/autotest_common.sh@931 -- # uname 00:21:59.755 23:18:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:59.756 23:18:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2876474 00:21:59.756 23:18:22 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:59.756 23:18:22 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:59.756 23:18:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2876474' 00:21:59.756 killing process with pid 2876474 00:21:59.756 23:18:22 -- common/autotest_common.sh@945 -- # kill 2876474 00:21:59.756 Received shutdown signal, test time was about 10.000000 seconds 00:21:59.756 00:21:59.756 Latency(us) 00:21:59.756 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.756 =================================================================================================================== 00:21:59.756 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:59.756 23:18:22 -- common/autotest_common.sh@950 -- # wait 2876474 00:22:00.015 23:18:22 -- target/tls.sh@37 -- # return 1 00:22:00.015 23:18:22 -- common/autotest_common.sh@643 -- # es=1 00:22:00.015 23:18:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:00.015 23:18:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:00.015 23:18:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:00.015 23:18:22 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:00.015 23:18:22 -- common/autotest_common.sh@640 -- # local es=0 00:22:00.015 23:18:22 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:00.015 23:18:22 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:22:00.015 23:18:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:00.015 23:18:22 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:22:00.015 23:18:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:00.015 23:18:22 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:00.015 23:18:22 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:00.015 23:18:22 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:00.015 23:18:22 -- target/tls.sh@23 -- 
# hostnqn=nqn.2016-06.io.spdk:host2 00:22:00.015 23:18:22 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:22:00.015 23:18:22 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:00.015 23:18:22 -- target/tls.sh@28 -- # bdevperf_pid=2876812 00:22:00.015 23:18:22 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:00.015 23:18:22 -- target/tls.sh@31 -- # waitforlisten 2876812 /var/tmp/bdevperf.sock 00:22:00.015 23:18:22 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:00.015 23:18:22 -- common/autotest_common.sh@819 -- # '[' -z 2876812 ']' 00:22:00.015 23:18:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:00.015 23:18:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:00.015 23:18:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:00.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:00.015 23:18:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:00.015 23:18:22 -- common/autotest_common.sh@10 -- # set +x 00:22:00.015 [2024-06-07 23:18:22.546201] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:22:00.015 [2024-06-07 23:18:22.546278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2876812 ] 00:22:00.015 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.015 [2024-06-07 23:18:22.596796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.015 [2024-06-07 23:18:22.622303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:00.953 23:18:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:00.953 23:18:23 -- common/autotest_common.sh@852 -- # return 0 00:22:00.953 23:18:23 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:00.953 [2024-06-07 23:18:23.405832] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:00.953 [2024-06-07 23:18:23.410196] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:00.953 [2024-06-07 23:18:23.410214] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:00.953 [2024-06-07 23:18:23.410235] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:00.953 [2024-06-07 23:18:23.410872] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1843d00 (107): Transport endpoint is not connected 00:22:00.953 [2024-06-07 23:18:23.411865] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1843d00 (9): Bad file descriptor 00:22:00.953 [2024-06-07 23:18:23.412867] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:00.953 [2024-06-07 23:18:23.412874] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:00.953 [2024-06-07 23:18:23.412881] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:00.953 request: 00:22:00.953 { 00:22:00.953 "name": "TLSTEST", 00:22:00.953 "trtype": "tcp", 00:22:00.953 "traddr": "10.0.0.2", 00:22:00.953 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:00.953 "adrfam": "ipv4", 00:22:00.953 "trsvcid": "4420", 00:22:00.953 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.953 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:22:00.953 "method": "bdev_nvme_attach_controller", 00:22:00.953 "req_id": 1 00:22:00.953 } 00:22:00.953 Got JSON-RPC error response 00:22:00.953 response: 00:22:00.953 { 00:22:00.953 "code": -32602, 00:22:00.953 "message": "Invalid parameters" 00:22:00.953 } 00:22:00.953 23:18:23 -- target/tls.sh@36 -- # killprocess 2876812 00:22:00.953 23:18:23 -- common/autotest_common.sh@926 -- # '[' -z 2876812 ']' 00:22:00.953 23:18:23 -- common/autotest_common.sh@930 -- # kill -0 2876812 00:22:00.953 23:18:23 -- common/autotest_common.sh@931 -- # uname 00:22:00.953 23:18:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:00.953 23:18:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2876812 00:22:00.953 23:18:23 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:00.953 23:18:23 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:00.953 23:18:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2876812' 00:22:00.953 killing process with pid 2876812 00:22:00.953 23:18:23 -- common/autotest_common.sh@945 -- # kill 2876812 00:22:00.953 Received shutdown signal, test time was about 10.000000 seconds 00:22:00.953 00:22:00.953 Latency(us) 00:22:00.953 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.953 =================================================================================================================== 00:22:00.953 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:00.953 23:18:23 -- common/autotest_common.sh@950 -- # wait 2876812 00:22:00.953 23:18:23 -- target/tls.sh@37 -- # return 1 00:22:00.953 23:18:23 -- common/autotest_common.sh@643 -- # es=1 00:22:00.953 23:18:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:00.953 23:18:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:00.953 23:18:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:00.953 23:18:23 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:00.953 23:18:23 -- common/autotest_common.sh@640 -- # local es=0 00:22:00.953 23:18:23 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:00.953 23:18:23 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:22:00.953 23:18:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:00.953 23:18:23 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:22:00.953 23:18:23 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:00.953 23:18:23 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:00.953 23:18:23 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:00.953 23:18:23 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:00.953 23:18:23 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:00.953 23:18:23 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:22:00.953 23:18:23 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:00.953 23:18:23 -- target/tls.sh@28 -- # bdevperf_pid=2876915 00:22:00.953 23:18:23 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:00.953 23:18:23 -- target/tls.sh@31 -- # waitforlisten 2876915 /var/tmp/bdevperf.sock 00:22:00.953 23:18:23 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:00.953 23:18:23 -- common/autotest_common.sh@819 -- # '[' -z 2876915 ']' 00:22:00.953 23:18:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:00.953 23:18:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:00.953 23:18:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:00.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:00.953 23:18:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:00.953 23:18:23 -- common/autotest_common.sh@10 -- # set +x 00:22:00.953 [2024-06-07 23:18:23.628351] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:22:00.953 [2024-06-07 23:18:23.628408] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2876915 ] 00:22:01.213 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.213 [2024-06-07 23:18:23.678260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.213 [2024-06-07 23:18:23.704559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:01.780 23:18:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:01.780 23:18:24 -- common/autotest_common.sh@852 -- # return 0 00:22:01.780 23:18:24 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:02.039 [2024-06-07 23:18:24.532432] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:02.039 [2024-06-07 23:18:24.541259] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:02.039 [2024-06-07 23:18:24.541276] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:02.039 [2024-06-07 23:18:24.541295] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:02.039 [2024-06-07 23:18:24.541336] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d72d00 (107): Transport endpoint is not connected 00:22:02.039 [2024-06-07 23:18:24.542311] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d72d00 (9): Bad file descriptor 00:22:02.039 [2024-06-07 23:18:24.543313] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:02.039 [2024-06-07 23:18:24.543320] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:02.039 [2024-06-07 23:18:24.543327] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:22:02.039 request: 00:22:02.039 { 00:22:02.039 "name": "TLSTEST", 00:22:02.039 "trtype": "tcp", 00:22:02.039 "traddr": "10.0.0.2", 00:22:02.039 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:02.039 "adrfam": "ipv4", 00:22:02.039 "trsvcid": "4420", 00:22:02.039 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:02.039 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:22:02.039 "method": "bdev_nvme_attach_controller", 00:22:02.039 "req_id": 1 00:22:02.039 } 00:22:02.039 Got JSON-RPC error response 00:22:02.039 response: 00:22:02.039 { 00:22:02.039 "code": -32602, 00:22:02.039 "message": "Invalid parameters" 00:22:02.040 } 00:22:02.040 23:18:24 -- target/tls.sh@36 -- # killprocess 2876915 00:22:02.040 23:18:24 -- common/autotest_common.sh@926 -- # '[' -z 2876915 ']' 00:22:02.040 23:18:24 -- common/autotest_common.sh@930 -- # kill -0 2876915 00:22:02.040 23:18:24 -- common/autotest_common.sh@931 -- # uname 00:22:02.040 23:18:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:02.040 23:18:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2876915 00:22:02.040 23:18:24 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:02.040 23:18:24 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:02.040 23:18:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2876915' 00:22:02.040 killing process with pid 2876915 00:22:02.040 23:18:24 -- common/autotest_common.sh@945 -- # kill 2876915 00:22:02.040 Received shutdown signal, test time was about 10.000000 seconds 00:22:02.040 00:22:02.040 Latency(us) 00:22:02.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.040 =================================================================================================================== 00:22:02.040 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:02.040 23:18:24 -- common/autotest_common.sh@950 -- # wait 2876915 00:22:02.040 23:18:24 -- target/tls.sh@37 -- # return 1 00:22:02.040 23:18:24 -- common/autotest_common.sh@643 -- # es=1 00:22:02.040 23:18:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:02.040 23:18:24 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:02.040 23:18:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:02.040 23:18:24 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:02.040 23:18:24 -- common/autotest_common.sh@640 -- # local es=0 00:22:02.040 23:18:24 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:02.040 23:18:24 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:22:02.040 23:18:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:02.040 23:18:24 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:22:02.040 23:18:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:02.040 23:18:24 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:02.040 23:18:24 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:02.040 23:18:24 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:02.040 23:18:24 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:02.040 23:18:24 -- target/tls.sh@23 -- # psk= 00:22:02.040 23:18:24 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:02.040 23:18:24 -- target/tls.sh@28 
-- # bdevperf_pid=2877182 00:22:02.040 23:18:24 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:02.040 23:18:24 -- target/tls.sh@31 -- # waitforlisten 2877182 /var/tmp/bdevperf.sock 00:22:02.040 23:18:24 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:02.040 23:18:24 -- common/autotest_common.sh@819 -- # '[' -z 2877182 ']' 00:22:02.040 23:18:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.040 23:18:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:02.040 23:18:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:02.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:02.040 23:18:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:02.040 23:18:24 -- common/autotest_common.sh@10 -- # set +x 00:22:02.299 [2024-06-07 23:18:24.758313] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:22:02.299 [2024-06-07 23:18:24.758368] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2877182 ] 00:22:02.299 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.299 [2024-06-07 23:18:24.809326] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.299 [2024-06-07 23:18:24.833814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.866 23:18:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:02.866 23:18:25 -- common/autotest_common.sh@852 -- # return 0 00:22:02.866 23:18:25 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:03.125 [2024-06-07 23:18:25.664324] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:03.125 [2024-06-07 23:18:25.666151] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bc330 (9): Bad file descriptor 00:22:03.125 [2024-06-07 23:18:25.667150] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.125 [2024-06-07 23:18:25.667158] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:03.125 [2024-06-07 23:18:25.667165] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:03.125 request: 00:22:03.125 { 00:22:03.125 "name": "TLSTEST", 00:22:03.125 "trtype": "tcp", 00:22:03.125 "traddr": "10.0.0.2", 00:22:03.125 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:03.125 "adrfam": "ipv4", 00:22:03.125 "trsvcid": "4420", 00:22:03.125 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.125 "method": "bdev_nvme_attach_controller", 00:22:03.125 "req_id": 1 00:22:03.125 } 00:22:03.125 Got JSON-RPC error response 00:22:03.125 response: 00:22:03.125 { 00:22:03.125 "code": -32602, 00:22:03.125 "message": "Invalid parameters" 00:22:03.125 } 00:22:03.125 23:18:25 -- target/tls.sh@36 -- # killprocess 2877182 00:22:03.125 23:18:25 -- common/autotest_common.sh@926 -- # '[' -z 2877182 ']' 00:22:03.125 23:18:25 -- common/autotest_common.sh@930 -- # kill -0 2877182 00:22:03.125 23:18:25 -- common/autotest_common.sh@931 -- # uname 00:22:03.125 23:18:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:03.125 23:18:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2877182 00:22:03.125 23:18:25 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:03.125 23:18:25 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:03.125 23:18:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2877182' 00:22:03.125 killing process with pid 2877182 00:22:03.125 23:18:25 -- common/autotest_common.sh@945 -- # kill 2877182 00:22:03.125 Received shutdown signal, test time was about 10.000000 seconds 00:22:03.125 00:22:03.125 Latency(us) 00:22:03.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.125 =================================================================================================================== 00:22:03.125 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:03.125 23:18:25 -- common/autotest_common.sh@950 -- # wait 2877182 00:22:03.384 23:18:25 -- target/tls.sh@37 -- # return 1 00:22:03.384 23:18:25 -- common/autotest_common.sh@643 -- # es=1 00:22:03.384 23:18:25 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:03.384 23:18:25 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:03.384 23:18:25 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:03.384 23:18:25 -- target/tls.sh@167 -- # killprocess 2870942 00:22:03.384 23:18:25 -- common/autotest_common.sh@926 -- # '[' -z 2870942 ']' 00:22:03.384 23:18:25 -- common/autotest_common.sh@930 -- # kill -0 2870942 00:22:03.384 23:18:25 -- common/autotest_common.sh@931 -- # uname 00:22:03.384 23:18:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:03.384 23:18:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2870942 00:22:03.384 23:18:25 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:03.384 23:18:25 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:03.384 23:18:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2870942' 00:22:03.384 killing process with pid 2870942 00:22:03.384 23:18:25 -- common/autotest_common.sh@945 -- # kill 2870942 00:22:03.384 23:18:25 -- common/autotest_common.sh@950 -- # wait 2870942 00:22:03.384 23:18:26 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:22:03.384 23:18:26 -- target/tls.sh@49 -- # local key hash crc 00:22:03.384 23:18:26 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:03.384 23:18:26 -- target/tls.sh@51 -- # hash=02 00:22:03.384 23:18:26 -- target/tls.sh@52 -- # tail 
-c8 00:22:03.384 23:18:26 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:22:03.384 23:18:26 -- target/tls.sh@52 -- # gzip -1 -c 00:22:03.384 23:18:26 -- target/tls.sh@52 -- # head -c 4 00:22:03.384 23:18:26 -- target/tls.sh@52 -- # crc='�e�'\''' 00:22:03.384 23:18:26 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:22:03.384 23:18:26 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:22:03.384 23:18:26 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:03.384 23:18:26 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:03.384 23:18:26 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:03.384 23:18:26 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:03.384 23:18:26 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:03.384 23:18:26 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:22:03.384 23:18:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:03.384 23:18:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:03.384 23:18:26 -- common/autotest_common.sh@10 -- # set +x 00:22:03.384 23:18:26 -- nvmf/common.sh@469 -- # nvmfpid=2877540 00:22:03.384 23:18:26 -- nvmf/common.sh@470 -- # waitforlisten 2877540 00:22:03.384 23:18:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:03.384 23:18:26 -- common/autotest_common.sh@819 -- # '[' -z 2877540 ']' 00:22:03.384 23:18:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.384 23:18:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:03.384 23:18:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.384 23:18:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:03.384 23:18:26 -- common/autotest_common.sh@10 -- # set +x 00:22:03.644 [2024-06-07 23:18:26.100887] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:22:03.644 [2024-06-07 23:18:26.100941] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.644 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.644 [2024-06-07 23:18:26.180636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.644 [2024-06-07 23:18:26.206811] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:03.644 [2024-06-07 23:18:26.206906] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:03.644 [2024-06-07 23:18:26.206912] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:03.644 [2024-06-07 23:18:26.206916] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
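The gzip/base64 pipeline above is target/tls.sh's format_interchange_psk: gzip -1 writes a trailer whose first four bytes are the little-endian CRC32 of the uncompressed input, so `tail -c8 | head -c4` extracts the CRC of the configured key, and base64(key || crc) wrapped as NVMeTLSkey-1:02:...: becomes key_long. A minimal Python sketch of the same computation, assuming the key and expected output shown in the log (the helper name and standalone-script form are illustrative only):

    import base64
    import struct
    import zlib

    def format_interchange_psk(key: str, hash_id: str = "02") -> str:
        # gzip's trailer stores the CRC32 of the uncompressed data in
        # little-endian order, which is exactly what
        # `gzip -1 -c | tail -c8 | head -c4` pulls out above.
        raw = key.encode("ascii")
        crc = struct.pack("<I", zlib.crc32(raw))           # 4-byte LE CRC32
        b64 = base64.b64encode(raw + crc).decode("ascii")  # base64(key || crc)
        return f"NVMeTLSkey-1:{hash_id}:{b64}:"

    if __name__ == "__main__":
        key = "00112233445566778899aabbccddeeff0011223344556677"
        # Expected, per the log:
        # NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
        print(format_interchange_psk(key))

The resulting string is written to key_long.txt, chmod'ed 0600, and handed to nvmf_subsystem_add_host and bdev_nvme_attach_controller through --psk in the steps that follow.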
00:22:03.644 [2024-06-07 23:18:26.206936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:04.211 23:18:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:04.212 23:18:26 -- common/autotest_common.sh@852 -- # return 0 00:22:04.212 23:18:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:04.212 23:18:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:04.212 23:18:26 -- common/autotest_common.sh@10 -- # set +x 00:22:04.212 23:18:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:04.212 23:18:26 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:04.212 23:18:26 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:04.212 23:18:26 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:04.471 [2024-06-07 23:18:27.007899] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:04.471 23:18:27 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:04.731 23:18:27 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:04.731 [2024-06-07 23:18:27.304608] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:04.731 [2024-06-07 23:18:27.304786] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:04.731 23:18:27 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:04.991 malloc0 00:22:04.991 23:18:27 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:04.991 23:18:27 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:05.249 23:18:27 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:05.249 23:18:27 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:05.249 23:18:27 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:05.249 23:18:27 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:05.249 23:18:27 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:22:05.249 23:18:27 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:05.249 23:18:27 -- target/tls.sh@28 -- # bdevperf_pid=2877907 00:22:05.249 23:18:27 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:05.249 23:18:27 -- target/tls.sh@31 -- # waitforlisten 2877907 /var/tmp/bdevperf.sock 00:22:05.249 23:18:27 -- common/autotest_common.sh@819 -- # '[' -z 2877907 ']' 00:22:05.249 23:18:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:05.249 23:18:27 -- common/autotest_common.sh@824 -- # local max_retries=100 
00:22:05.249 23:18:27 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:05.249 23:18:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:05.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:05.249 23:18:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:05.249 23:18:27 -- common/autotest_common.sh@10 -- # set +x 00:22:05.249 [2024-06-07 23:18:27.789281] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:22:05.249 [2024-06-07 23:18:27.789330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2877907 ] 00:22:05.250 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.250 [2024-06-07 23:18:27.839133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.250 [2024-06-07 23:18:27.865617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:06.187 23:18:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:06.187 23:18:28 -- common/autotest_common.sh@852 -- # return 0 00:22:06.188 23:18:28 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:06.188 [2024-06-07 23:18:28.705817] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:06.188 TLSTESTn1 00:22:06.188 23:18:28 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:06.448 Running I/O for 10 seconds... 
00:22:16.440 00:22:16.440 Latency(us) 00:22:16.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:16.440 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:16.440 Verification LBA range: start 0x0 length 0x2000 00:22:16.440 TLSTESTn1 : 10.02 3530.48 13.79 0.00 0.00 36214.86 8082.77 55487.15 00:22:16.440 =================================================================================================================== 00:22:16.440 Total : 3530.48 13.79 0.00 0.00 36214.86 8082.77 55487.15 00:22:16.440 0 00:22:16.440 23:18:38 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:16.440 23:18:38 -- target/tls.sh@45 -- # killprocess 2877907 00:22:16.440 23:18:38 -- common/autotest_common.sh@926 -- # '[' -z 2877907 ']' 00:22:16.440 23:18:38 -- common/autotest_common.sh@930 -- # kill -0 2877907 00:22:16.440 23:18:38 -- common/autotest_common.sh@931 -- # uname 00:22:16.440 23:18:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:16.440 23:18:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2877907 00:22:16.440 23:18:38 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:16.440 23:18:38 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:16.440 23:18:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2877907' 00:22:16.440 killing process with pid 2877907 00:22:16.440 23:18:38 -- common/autotest_common.sh@945 -- # kill 2877907 00:22:16.440 Received shutdown signal, test time was about 10.000000 seconds 00:22:16.440 00:22:16.440 Latency(us) 00:22:16.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:16.440 =================================================================================================================== 00:22:16.440 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:16.440 23:18:38 -- common/autotest_common.sh@950 -- # wait 2877907 00:22:16.440 23:18:39 -- target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:16.440 23:18:39 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:16.440 23:18:39 -- common/autotest_common.sh@640 -- # local es=0 00:22:16.440 23:18:39 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:16.440 23:18:39 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:22:16.440 23:18:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:16.440 23:18:39 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:22:16.440 23:18:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:16.440 23:18:39 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:16.440 23:18:39 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:16.440 23:18:39 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:16.440 23:18:39 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:16.440 23:18:39 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:22:16.440 23:18:39 -- 
target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:16.440 23:18:39 -- target/tls.sh@28 -- # bdevperf_pid=2880007 00:22:16.440 23:18:39 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:16.440 23:18:39 -- target/tls.sh@31 -- # waitforlisten 2880007 /var/tmp/bdevperf.sock 00:22:16.440 23:18:39 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:16.440 23:18:39 -- common/autotest_common.sh@819 -- # '[' -z 2880007 ']' 00:22:16.440 23:18:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:16.440 23:18:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:16.440 23:18:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:16.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:16.440 23:18:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:16.440 23:18:39 -- common/autotest_common.sh@10 -- # set +x 00:22:16.700 [2024-06-07 23:18:39.163814] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:22:16.700 [2024-06-07 23:18:39.163884] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2880007 ] 00:22:16.700 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.700 [2024-06-07 23:18:39.215707] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.700 [2024-06-07 23:18:39.242037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:17.268 23:18:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:17.268 23:18:39 -- common/autotest_common.sh@852 -- # return 0 00:22:17.268 23:18:39 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:17.528 [2024-06-07 23:18:40.070142] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:17.528 [2024-06-07 23:18:40.070174] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:17.528 request: 00:22:17.528 { 00:22:17.528 "name": "TLSTEST", 00:22:17.528 "trtype": "tcp", 00:22:17.528 "traddr": "10.0.0.2", 00:22:17.528 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:17.528 "adrfam": "ipv4", 00:22:17.528 "trsvcid": "4420", 00:22:17.528 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:17.528 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:22:17.528 "method": "bdev_nvme_attach_controller", 00:22:17.528 "req_id": 1 00:22:17.528 } 00:22:17.528 Got JSON-RPC error response 00:22:17.528 response: 00:22:17.528 { 00:22:17.528 "code": -22, 00:22:17.528 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:22:17.528 } 00:22:17.528 23:18:40 -- target/tls.sh@36 -- # killprocess 2880007 00:22:17.528 23:18:40 -- common/autotest_common.sh@926 -- # '[' -z 2880007 ']' 00:22:17.528 23:18:40 -- 
common/autotest_common.sh@930 -- # kill -0 2880007 00:22:17.528 23:18:40 -- common/autotest_common.sh@931 -- # uname 00:22:17.528 23:18:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:17.528 23:18:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2880007 00:22:17.528 23:18:40 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:17.528 23:18:40 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:17.528 23:18:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2880007' 00:22:17.528 killing process with pid 2880007 00:22:17.528 23:18:40 -- common/autotest_common.sh@945 -- # kill 2880007 00:22:17.528 Received shutdown signal, test time was about 10.000000 seconds 00:22:17.528 00:22:17.528 Latency(us) 00:22:17.528 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.528 =================================================================================================================== 00:22:17.528 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:17.528 23:18:40 -- common/autotest_common.sh@950 -- # wait 2880007 00:22:17.789 23:18:40 -- target/tls.sh@37 -- # return 1 00:22:17.789 23:18:40 -- common/autotest_common.sh@643 -- # es=1 00:22:17.789 23:18:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:17.789 23:18:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:17.789 23:18:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:17.789 23:18:40 -- target/tls.sh@183 -- # killprocess 2877540 00:22:17.789 23:18:40 -- common/autotest_common.sh@926 -- # '[' -z 2877540 ']' 00:22:17.789 23:18:40 -- common/autotest_common.sh@930 -- # kill -0 2877540 00:22:17.789 23:18:40 -- common/autotest_common.sh@931 -- # uname 00:22:17.789 23:18:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:17.789 23:18:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2877540 00:22:17.789 23:18:40 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:17.789 23:18:40 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:17.789 23:18:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2877540' 00:22:17.789 killing process with pid 2877540 00:22:17.789 23:18:40 -- common/autotest_common.sh@945 -- # kill 2877540 00:22:17.789 23:18:40 -- common/autotest_common.sh@950 -- # wait 2877540 00:22:17.789 23:18:40 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:17.789 23:18:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:17.789 23:18:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:17.789 23:18:40 -- common/autotest_common.sh@10 -- # set +x 00:22:17.789 23:18:40 -- nvmf/common.sh@469 -- # nvmfpid=2880297 00:22:17.789 23:18:40 -- nvmf/common.sh@470 -- # waitforlisten 2880297 00:22:17.789 23:18:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:17.789 23:18:40 -- common/autotest_common.sh@819 -- # '[' -z 2880297 ']' 00:22:17.789 23:18:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.789 23:18:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:17.789 23:18:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
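The -22 "Could not retrieve PSK from file" failure above is caused purely by the chmod 0666 at target/tls.sh@179: the PSK loader refuses a key file that group or others can read, which is also why the test later restores 0600 before a run is allowed to succeed. A rough sketch of such a permission check, assuming owner-only access is required (the function name and error text are illustrative, not SPDK's):

    import os
    import stat

    def check_psk_file_permissions(path: str) -> None:
        # Reject any PSK file that is accessible to group or others; mode 0666
        # (as set by `chmod 0666 key_long.txt` above) fails, mode 0600 passes.
        mode = stat.S_IMODE(os.stat(path).st_mode)
        if mode & (stat.S_IRWXG | stat.S_IRWXO):
            raise PermissionError(
                f"Incorrect permissions for PSK file {path}: {oct(mode)}")

    # check_psk_file_permissions("key_long.txt")  # raises while the file is 0666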
00:22:17.789 23:18:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:17.789 23:18:40 -- common/autotest_common.sh@10 -- # set +x 00:22:17.789 [2024-06-07 23:18:40.459594] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:22:17.789 [2024-06-07 23:18:40.459646] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:18.049 EAL: No free 2048 kB hugepages reported on node 1 00:22:18.049 [2024-06-07 23:18:40.519952] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.049 [2024-06-07 23:18:40.545535] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:18.049 [2024-06-07 23:18:40.545636] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:18.049 [2024-06-07 23:18:40.545642] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:18.049 [2024-06-07 23:18:40.545648] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:18.049 [2024-06-07 23:18:40.545667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:18.618 23:18:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:18.618 23:18:41 -- common/autotest_common.sh@852 -- # return 0 00:22:18.618 23:18:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:18.618 23:18:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:18.618 23:18:41 -- common/autotest_common.sh@10 -- # set +x 00:22:18.618 23:18:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.618 23:18:41 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:18.618 23:18:41 -- common/autotest_common.sh@640 -- # local es=0 00:22:18.618 23:18:41 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:18.618 23:18:41 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:22:18.618 23:18:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:18.618 23:18:41 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:22:18.618 23:18:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:18.618 23:18:41 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:18.618 23:18:41 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:18.618 23:18:41 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:18.879 [2024-06-07 23:18:41.394773] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:18.879 23:18:41 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:18.879 23:18:41 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:19.139 [2024-06-07 23:18:41.679464] tcp.c: 
912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:19.140 [2024-06-07 23:18:41.679637] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:19.140 23:18:41 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:19.140 malloc0 00:22:19.399 23:18:41 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:19.399 23:18:41 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:19.660 [2024-06-07 23:18:42.110569] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:19.660 [2024-06-07 23:18:42.110591] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:19.660 [2024-06-07 23:18:42.110604] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:22:19.660 request: 00:22:19.660 { 00:22:19.660 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:19.660 "host": "nqn.2016-06.io.spdk:host1", 00:22:19.660 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:22:19.660 "method": "nvmf_subsystem_add_host", 00:22:19.660 "req_id": 1 00:22:19.660 } 00:22:19.660 Got JSON-RPC error response 00:22:19.660 response: 00:22:19.660 { 00:22:19.660 "code": -32603, 00:22:19.660 "message": "Internal error" 00:22:19.660 } 00:22:19.660 23:18:42 -- common/autotest_common.sh@643 -- # es=1 00:22:19.660 23:18:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:19.660 23:18:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:19.660 23:18:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:19.660 23:18:42 -- target/tls.sh@189 -- # killprocess 2880297 00:22:19.660 23:18:42 -- common/autotest_common.sh@926 -- # '[' -z 2880297 ']' 00:22:19.660 23:18:42 -- common/autotest_common.sh@930 -- # kill -0 2880297 00:22:19.660 23:18:42 -- common/autotest_common.sh@931 -- # uname 00:22:19.660 23:18:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:19.660 23:18:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2880297 00:22:19.660 23:18:42 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:19.660 23:18:42 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:19.660 23:18:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2880297' 00:22:19.660 killing process with pid 2880297 00:22:19.660 23:18:42 -- common/autotest_common.sh@945 -- # kill 2880297 00:22:19.660 23:18:42 -- common/autotest_common.sh@950 -- # wait 2880297 00:22:19.660 23:18:42 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:19.660 23:18:42 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:22:19.660 23:18:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:19.660 23:18:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:19.660 23:18:42 -- common/autotest_common.sh@10 -- # set +x 00:22:19.660 23:18:42 -- nvmf/common.sh@469 -- # nvmfpid=2880672 00:22:19.660 23:18:42 -- nvmf/common.sh@470 -- # waitforlisten 2880672 00:22:19.660 23:18:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:19.660 23:18:42 -- common/autotest_common.sh@819 -- # '[' -z 2880672 ']' 00:22:19.660 23:18:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.660 23:18:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:19.660 23:18:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.660 23:18:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:19.660 23:18:42 -- common/autotest_common.sh@10 -- # set +x 00:22:19.921 [2024-06-07 23:18:42.353182] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:22:19.921 [2024-06-07 23:18:42.353236] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:19.921 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.921 [2024-06-07 23:18:42.434606] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.921 [2024-06-07 23:18:42.461130] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:19.921 [2024-06-07 23:18:42.461222] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:19.921 [2024-06-07 23:18:42.461229] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:19.921 [2024-06-07 23:18:42.461234] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:19.921 [2024-06-07 23:18:42.461253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.489 23:18:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:20.489 23:18:43 -- common/autotest_common.sh@852 -- # return 0 00:22:20.489 23:18:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:20.489 23:18:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:20.489 23:18:43 -- common/autotest_common.sh@10 -- # set +x 00:22:20.489 23:18:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:20.489 23:18:43 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:20.489 23:18:43 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:20.489 23:18:43 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:20.747 [2024-06-07 23:18:43.278334] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:20.747 23:18:43 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:21.006 23:18:43 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:21.006 [2024-06-07 23:18:43.563036] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:21.006 [2024-06-07 23:18:43.563210] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:21.006 23:18:43 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:21.266 malloc0 00:22:21.266 23:18:43 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:21.266 23:18:43 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:21.528 23:18:44 -- target/tls.sh@197 -- # bdevperf_pid=2881039 00:22:21.528 23:18:44 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:21.528 23:18:44 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:21.528 23:18:44 -- target/tls.sh@200 -- # waitforlisten 2881039 /var/tmp/bdevperf.sock 00:22:21.528 23:18:44 -- common/autotest_common.sh@819 -- # '[' -z 2881039 ']' 00:22:21.528 23:18:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:21.528 23:18:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:21.528 23:18:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:21.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
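setup_nvmf_tgt (target/tls.sh@58-@67) has now been replayed against the fresh target with the 0600 key file. The sketch below simply re-issues the same rpc.py sequence from Python via subprocess; the commands are the ones visible in the log, while the shortened paths are assumptions for readability:

    import subprocess

    RPC = "spdk/scripts/rpc.py"                 # full Jenkins workspace path in the log
    KEY = "spdk/test/nvmf/target/key_long.txt"  # the 0600 PSK file from above

    def rpc(*args: str) -> None:
        subprocess.run([RPC, *args], check=True)

    # Same order as the log: transport, subsystem, TLS listener (-k), backing
    # bdev, namespace, then the host entry that binds host1 to the PSK file.
    rpc("nvmf_create_transport", "-t", "tcp", "-o")
    rpc("nvmf_create_subsystem", "nqn.2016-06.io.spdk:cnode1",
        "-s", "SPDK00000000000001", "-m", "10")
    rpc("nvmf_subsystem_add_listener", "nqn.2016-06.io.spdk:cnode1",
        "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-k")
    rpc("bdev_malloc_create", "32", "4096", "-b", "malloc0")
    rpc("nvmf_subsystem_add_ns", "nqn.2016-06.io.spdk:cnode1", "malloc0", "-n", "1")
    rpc("nvmf_subsystem_add_host", "nqn.2016-06.io.spdk:cnode1",
        "nqn.2016-06.io.spdk:host1", "--psk", KEY)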
00:22:21.528 23:18:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:21.528 23:18:44 -- common/autotest_common.sh@10 -- # set +x 00:22:21.528 [2024-06-07 23:18:44.071018] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:22:21.528 [2024-06-07 23:18:44.071067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2881039 ] 00:22:21.528 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.528 [2024-06-07 23:18:44.121073] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.528 [2024-06-07 23:18:44.147653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:22.149 23:18:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:22.149 23:18:44 -- common/autotest_common.sh@852 -- # return 0 00:22:22.149 23:18:44 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:22.409 [2024-06-07 23:18:44.931631] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:22.409 TLSTESTn1 00:22:22.409 23:18:45 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:22.668 23:18:45 -- target/tls.sh@205 -- # tgtconf='{ 00:22:22.668 "subsystems": [ 00:22:22.668 { 00:22:22.668 "subsystem": "iobuf", 00:22:22.668 "config": [ 00:22:22.669 { 00:22:22.669 "method": "iobuf_set_options", 00:22:22.669 "params": { 00:22:22.669 "small_pool_count": 8192, 00:22:22.669 "large_pool_count": 1024, 00:22:22.669 "small_bufsize": 8192, 00:22:22.669 "large_bufsize": 135168 00:22:22.669 } 00:22:22.669 } 00:22:22.669 ] 00:22:22.669 }, 00:22:22.669 { 00:22:22.669 "subsystem": "sock", 00:22:22.669 "config": [ 00:22:22.669 { 00:22:22.669 "method": "sock_impl_set_options", 00:22:22.669 "params": { 00:22:22.669 "impl_name": "posix", 00:22:22.669 "recv_buf_size": 2097152, 00:22:22.669 "send_buf_size": 2097152, 00:22:22.669 "enable_recv_pipe": true, 00:22:22.669 "enable_quickack": false, 00:22:22.669 "enable_placement_id": 0, 00:22:22.669 "enable_zerocopy_send_server": true, 00:22:22.669 "enable_zerocopy_send_client": false, 00:22:22.669 "zerocopy_threshold": 0, 00:22:22.669 "tls_version": 0, 00:22:22.669 "enable_ktls": false 00:22:22.669 } 00:22:22.669 }, 00:22:22.669 { 00:22:22.669 "method": "sock_impl_set_options", 00:22:22.669 "params": { 00:22:22.669 "impl_name": "ssl", 00:22:22.669 "recv_buf_size": 4096, 00:22:22.669 "send_buf_size": 4096, 00:22:22.669 "enable_recv_pipe": true, 00:22:22.669 "enable_quickack": false, 00:22:22.669 "enable_placement_id": 0, 00:22:22.669 "enable_zerocopy_send_server": true, 00:22:22.669 "enable_zerocopy_send_client": false, 00:22:22.669 "zerocopy_threshold": 0, 00:22:22.669 "tls_version": 0, 00:22:22.669 "enable_ktls": false 00:22:22.669 } 00:22:22.669 } 00:22:22.669 ] 00:22:22.669 }, 00:22:22.669 { 00:22:22.669 "subsystem": "vmd", 00:22:22.669 "config": [] 00:22:22.669 }, 00:22:22.669 { 00:22:22.669 "subsystem": "accel", 00:22:22.669 "config": [ 00:22:22.669 { 00:22:22.669 "method": "accel_set_options", 00:22:22.669 "params": { 00:22:22.669 "small_cache_size": 128, 
00:22:22.669 "large_cache_size": 16, 00:22:22.669 "task_count": 2048, 00:22:22.669 "sequence_count": 2048, 00:22:22.669 "buf_count": 2048 00:22:22.669 } 00:22:22.669 } 00:22:22.669 ] 00:22:22.669 }, 00:22:22.669 { 00:22:22.669 "subsystem": "bdev", 00:22:22.669 "config": [ 00:22:22.669 { 00:22:22.669 "method": "bdev_set_options", 00:22:22.669 "params": { 00:22:22.669 "bdev_io_pool_size": 65535, 00:22:22.669 "bdev_io_cache_size": 256, 00:22:22.669 "bdev_auto_examine": true, 00:22:22.669 "iobuf_small_cache_size": 128, 00:22:22.669 "iobuf_large_cache_size": 16 00:22:22.669 } 00:22:22.669 }, 00:22:22.669 { 00:22:22.669 "method": "bdev_raid_set_options", 00:22:22.669 "params": { 00:22:22.669 "process_window_size_kb": 1024 00:22:22.669 } 00:22:22.669 }, 00:22:22.669 { 00:22:22.669 "method": "bdev_iscsi_set_options", 00:22:22.669 "params": { 00:22:22.669 "timeout_sec": 30 00:22:22.669 } 00:22:22.669 }, 00:22:22.669 { 00:22:22.669 "method": "bdev_nvme_set_options", 00:22:22.669 "params": { 00:22:22.669 "action_on_timeout": "none", 00:22:22.669 "timeout_us": 0, 00:22:22.669 "timeout_admin_us": 0, 00:22:22.669 "keep_alive_timeout_ms": 10000, 00:22:22.669 "transport_retry_count": 4, 00:22:22.669 "arbitration_burst": 0, 00:22:22.669 "low_priority_weight": 0, 00:22:22.669 "medium_priority_weight": 0, 00:22:22.669 "high_priority_weight": 0, 00:22:22.669 "nvme_adminq_poll_period_us": 10000, 00:22:22.669 "nvme_ioq_poll_period_us": 0, 00:22:22.669 "io_queue_requests": 0, 00:22:22.669 "delay_cmd_submit": true, 00:22:22.669 "bdev_retry_count": 3, 00:22:22.669 "transport_ack_timeout": 0, 00:22:22.669 "ctrlr_loss_timeout_sec": 0, 00:22:22.669 "reconnect_delay_sec": 0, 00:22:22.669 "fast_io_fail_timeout_sec": 0, 00:22:22.669 "generate_uuids": false, 00:22:22.669 "transport_tos": 0, 00:22:22.669 "io_path_stat": false, 00:22:22.669 "allow_accel_sequence": false 00:22:22.669 } 00:22:22.669 }, 00:22:22.669 { 00:22:22.669 "method": "bdev_nvme_set_hotplug", 00:22:22.669 "params": { 00:22:22.669 "period_us": 100000, 00:22:22.669 "enable": false 00:22:22.669 } 00:22:22.669 }, 00:22:22.669 { 00:22:22.669 "method": "bdev_malloc_create", 00:22:22.669 "params": { 00:22:22.669 "name": "malloc0", 00:22:22.669 "num_blocks": 8192, 00:22:22.669 "block_size": 4096, 00:22:22.669 "physical_block_size": 4096, 00:22:22.669 "uuid": "ee73fe3d-bd3e-41a1-a847-8fbe21875ea7", 00:22:22.669 "optimal_io_boundary": 0 00:22:22.669 } 00:22:22.669 }, 00:22:22.669 { 00:22:22.669 "method": "bdev_wait_for_examine" 00:22:22.669 } 00:22:22.669 ] 00:22:22.669 }, 00:22:22.669 { 00:22:22.669 "subsystem": "nbd", 00:22:22.669 "config": [] 00:22:22.669 }, 00:22:22.669 { 00:22:22.669 "subsystem": "scheduler", 00:22:22.669 "config": [ 00:22:22.669 { 00:22:22.669 "method": "framework_set_scheduler", 00:22:22.669 "params": { 00:22:22.669 "name": "static" 00:22:22.669 } 00:22:22.669 } 00:22:22.669 ] 00:22:22.669 }, 00:22:22.669 { 00:22:22.669 "subsystem": "nvmf", 00:22:22.669 "config": [ 00:22:22.669 { 00:22:22.669 "method": "nvmf_set_config", 00:22:22.669 "params": { 00:22:22.669 "discovery_filter": "match_any", 00:22:22.669 "admin_cmd_passthru": { 00:22:22.669 "identify_ctrlr": false 00:22:22.669 } 00:22:22.669 } 00:22:22.669 }, 00:22:22.669 { 00:22:22.669 "method": "nvmf_set_max_subsystems", 00:22:22.669 "params": { 00:22:22.669 "max_subsystems": 1024 00:22:22.669 } 00:22:22.669 }, 00:22:22.669 { 00:22:22.669 "method": "nvmf_set_crdt", 00:22:22.669 "params": { 00:22:22.669 "crdt1": 0, 00:22:22.669 "crdt2": 0, 00:22:22.669 "crdt3": 0 00:22:22.669 } 
00:22:22.669 }, 00:22:22.669 { 00:22:22.669 "method": "nvmf_create_transport", 00:22:22.669 "params": { 00:22:22.669 "trtype": "TCP", 00:22:22.669 "max_queue_depth": 128, 00:22:22.669 "max_io_qpairs_per_ctrlr": 127, 00:22:22.669 "in_capsule_data_size": 4096, 00:22:22.669 "max_io_size": 131072, 00:22:22.669 "io_unit_size": 131072, 00:22:22.669 "max_aq_depth": 128, 00:22:22.669 "num_shared_buffers": 511, 00:22:22.669 "buf_cache_size": 4294967295, 00:22:22.669 "dif_insert_or_strip": false, 00:22:22.669 "zcopy": false, 00:22:22.669 "c2h_success": false, 00:22:22.669 "sock_priority": 0, 00:22:22.669 "abort_timeout_sec": 1 00:22:22.669 } 00:22:22.669 }, 00:22:22.669 { 00:22:22.669 "method": "nvmf_create_subsystem", 00:22:22.669 "params": { 00:22:22.669 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.669 "allow_any_host": false, 00:22:22.669 "serial_number": "SPDK00000000000001", 00:22:22.669 "model_number": "SPDK bdev Controller", 00:22:22.669 "max_namespaces": 10, 00:22:22.669 "min_cntlid": 1, 00:22:22.669 "max_cntlid": 65519, 00:22:22.669 "ana_reporting": false 00:22:22.669 } 00:22:22.669 }, 00:22:22.669 { 00:22:22.669 "method": "nvmf_subsystem_add_host", 00:22:22.669 "params": { 00:22:22.669 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.669 "host": "nqn.2016-06.io.spdk:host1", 00:22:22.669 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:22:22.669 } 00:22:22.669 }, 00:22:22.669 { 00:22:22.669 "method": "nvmf_subsystem_add_ns", 00:22:22.669 "params": { 00:22:22.669 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.669 "namespace": { 00:22:22.669 "nsid": 1, 00:22:22.669 "bdev_name": "malloc0", 00:22:22.669 "nguid": "EE73FE3DBD3E41A1A8478FBE21875EA7", 00:22:22.669 "uuid": "ee73fe3d-bd3e-41a1-a847-8fbe21875ea7" 00:22:22.669 } 00:22:22.669 } 00:22:22.670 }, 00:22:22.670 { 00:22:22.670 "method": "nvmf_subsystem_add_listener", 00:22:22.670 "params": { 00:22:22.670 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.670 "listen_address": { 00:22:22.670 "trtype": "TCP", 00:22:22.670 "adrfam": "IPv4", 00:22:22.670 "traddr": "10.0.0.2", 00:22:22.670 "trsvcid": "4420" 00:22:22.670 }, 00:22:22.670 "secure_channel": true 00:22:22.670 } 00:22:22.670 } 00:22:22.670 ] 00:22:22.670 } 00:22:22.670 ] 00:22:22.670 }' 00:22:22.670 23:18:45 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:22.929 23:18:45 -- target/tls.sh@206 -- # bdevperfconf='{ 00:22:22.929 "subsystems": [ 00:22:22.929 { 00:22:22.929 "subsystem": "iobuf", 00:22:22.929 "config": [ 00:22:22.929 { 00:22:22.929 "method": "iobuf_set_options", 00:22:22.929 "params": { 00:22:22.929 "small_pool_count": 8192, 00:22:22.929 "large_pool_count": 1024, 00:22:22.929 "small_bufsize": 8192, 00:22:22.929 "large_bufsize": 135168 00:22:22.930 } 00:22:22.930 } 00:22:22.930 ] 00:22:22.930 }, 00:22:22.930 { 00:22:22.930 "subsystem": "sock", 00:22:22.930 "config": [ 00:22:22.930 { 00:22:22.930 "method": "sock_impl_set_options", 00:22:22.930 "params": { 00:22:22.930 "impl_name": "posix", 00:22:22.930 "recv_buf_size": 2097152, 00:22:22.930 "send_buf_size": 2097152, 00:22:22.930 "enable_recv_pipe": true, 00:22:22.930 "enable_quickack": false, 00:22:22.930 "enable_placement_id": 0, 00:22:22.930 "enable_zerocopy_send_server": true, 00:22:22.930 "enable_zerocopy_send_client": false, 00:22:22.930 "zerocopy_threshold": 0, 00:22:22.930 "tls_version": 0, 00:22:22.930 "enable_ktls": false 00:22:22.930 } 00:22:22.930 }, 00:22:22.930 { 00:22:22.930 "method": 
"sock_impl_set_options", 00:22:22.930 "params": { 00:22:22.930 "impl_name": "ssl", 00:22:22.930 "recv_buf_size": 4096, 00:22:22.930 "send_buf_size": 4096, 00:22:22.930 "enable_recv_pipe": true, 00:22:22.930 "enable_quickack": false, 00:22:22.930 "enable_placement_id": 0, 00:22:22.930 "enable_zerocopy_send_server": true, 00:22:22.930 "enable_zerocopy_send_client": false, 00:22:22.930 "zerocopy_threshold": 0, 00:22:22.930 "tls_version": 0, 00:22:22.930 "enable_ktls": false 00:22:22.930 } 00:22:22.930 } 00:22:22.930 ] 00:22:22.930 }, 00:22:22.930 { 00:22:22.930 "subsystem": "vmd", 00:22:22.930 "config": [] 00:22:22.930 }, 00:22:22.930 { 00:22:22.930 "subsystem": "accel", 00:22:22.930 "config": [ 00:22:22.930 { 00:22:22.930 "method": "accel_set_options", 00:22:22.930 "params": { 00:22:22.930 "small_cache_size": 128, 00:22:22.930 "large_cache_size": 16, 00:22:22.930 "task_count": 2048, 00:22:22.930 "sequence_count": 2048, 00:22:22.930 "buf_count": 2048 00:22:22.930 } 00:22:22.930 } 00:22:22.930 ] 00:22:22.930 }, 00:22:22.930 { 00:22:22.930 "subsystem": "bdev", 00:22:22.930 "config": [ 00:22:22.930 { 00:22:22.930 "method": "bdev_set_options", 00:22:22.930 "params": { 00:22:22.930 "bdev_io_pool_size": 65535, 00:22:22.930 "bdev_io_cache_size": 256, 00:22:22.930 "bdev_auto_examine": true, 00:22:22.930 "iobuf_small_cache_size": 128, 00:22:22.930 "iobuf_large_cache_size": 16 00:22:22.930 } 00:22:22.930 }, 00:22:22.930 { 00:22:22.930 "method": "bdev_raid_set_options", 00:22:22.930 "params": { 00:22:22.930 "process_window_size_kb": 1024 00:22:22.930 } 00:22:22.930 }, 00:22:22.930 { 00:22:22.930 "method": "bdev_iscsi_set_options", 00:22:22.930 "params": { 00:22:22.930 "timeout_sec": 30 00:22:22.930 } 00:22:22.930 }, 00:22:22.930 { 00:22:22.930 "method": "bdev_nvme_set_options", 00:22:22.930 "params": { 00:22:22.930 "action_on_timeout": "none", 00:22:22.930 "timeout_us": 0, 00:22:22.930 "timeout_admin_us": 0, 00:22:22.930 "keep_alive_timeout_ms": 10000, 00:22:22.930 "transport_retry_count": 4, 00:22:22.930 "arbitration_burst": 0, 00:22:22.930 "low_priority_weight": 0, 00:22:22.930 "medium_priority_weight": 0, 00:22:22.930 "high_priority_weight": 0, 00:22:22.930 "nvme_adminq_poll_period_us": 10000, 00:22:22.930 "nvme_ioq_poll_period_us": 0, 00:22:22.930 "io_queue_requests": 512, 00:22:22.930 "delay_cmd_submit": true, 00:22:22.930 "bdev_retry_count": 3, 00:22:22.930 "transport_ack_timeout": 0, 00:22:22.930 "ctrlr_loss_timeout_sec": 0, 00:22:22.930 "reconnect_delay_sec": 0, 00:22:22.930 "fast_io_fail_timeout_sec": 0, 00:22:22.930 "generate_uuids": false, 00:22:22.930 "transport_tos": 0, 00:22:22.930 "io_path_stat": false, 00:22:22.930 "allow_accel_sequence": false 00:22:22.930 } 00:22:22.930 }, 00:22:22.930 { 00:22:22.930 "method": "bdev_nvme_attach_controller", 00:22:22.930 "params": { 00:22:22.930 "name": "TLSTEST", 00:22:22.930 "trtype": "TCP", 00:22:22.930 "adrfam": "IPv4", 00:22:22.930 "traddr": "10.0.0.2", 00:22:22.930 "trsvcid": "4420", 00:22:22.930 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.930 "prchk_reftag": false, 00:22:22.930 "prchk_guard": false, 00:22:22.930 "ctrlr_loss_timeout_sec": 0, 00:22:22.930 "reconnect_delay_sec": 0, 00:22:22.930 "fast_io_fail_timeout_sec": 0, 00:22:22.930 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:22:22.930 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:22.930 "hdgst": false, 00:22:22.930 "ddgst": false 00:22:22.930 } 00:22:22.930 }, 00:22:22.930 { 00:22:22.930 "method": "bdev_nvme_set_hotplug", 00:22:22.930 
"params": { 00:22:22.930 "period_us": 100000, 00:22:22.930 "enable": false 00:22:22.930 } 00:22:22.930 }, 00:22:22.930 { 00:22:22.930 "method": "bdev_wait_for_examine" 00:22:22.930 } 00:22:22.930 ] 00:22:22.930 }, 00:22:22.930 { 00:22:22.930 "subsystem": "nbd", 00:22:22.930 "config": [] 00:22:22.930 } 00:22:22.930 ] 00:22:22.930 }' 00:22:22.930 23:18:45 -- target/tls.sh@208 -- # killprocess 2881039 00:22:22.930 23:18:45 -- common/autotest_common.sh@926 -- # '[' -z 2881039 ']' 00:22:22.930 23:18:45 -- common/autotest_common.sh@930 -- # kill -0 2881039 00:22:22.930 23:18:45 -- common/autotest_common.sh@931 -- # uname 00:22:22.930 23:18:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:22.930 23:18:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2881039 00:22:22.930 23:18:45 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:22.930 23:18:45 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:22.930 23:18:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2881039' 00:22:22.930 killing process with pid 2881039 00:22:22.930 23:18:45 -- common/autotest_common.sh@945 -- # kill 2881039 00:22:22.930 Received shutdown signal, test time was about 10.000000 seconds 00:22:22.930 00:22:22.930 Latency(us) 00:22:22.930 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.930 =================================================================================================================== 00:22:22.930 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:22.930 23:18:45 -- common/autotest_common.sh@950 -- # wait 2881039 00:22:23.191 23:18:45 -- target/tls.sh@209 -- # killprocess 2880672 00:22:23.191 23:18:45 -- common/autotest_common.sh@926 -- # '[' -z 2880672 ']' 00:22:23.191 23:18:45 -- common/autotest_common.sh@930 -- # kill -0 2880672 00:22:23.191 23:18:45 -- common/autotest_common.sh@931 -- # uname 00:22:23.191 23:18:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:23.191 23:18:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2880672 00:22:23.191 23:18:45 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:23.191 23:18:45 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:23.191 23:18:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2880672' 00:22:23.191 killing process with pid 2880672 00:22:23.191 23:18:45 -- common/autotest_common.sh@945 -- # kill 2880672 00:22:23.191 23:18:45 -- common/autotest_common.sh@950 -- # wait 2880672 00:22:23.191 23:18:45 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:23.191 23:18:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:23.191 23:18:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:23.191 23:18:45 -- common/autotest_common.sh@10 -- # set +x 00:22:23.191 23:18:45 -- target/tls.sh@212 -- # echo '{ 00:22:23.191 "subsystems": [ 00:22:23.191 { 00:22:23.191 "subsystem": "iobuf", 00:22:23.191 "config": [ 00:22:23.191 { 00:22:23.191 "method": "iobuf_set_options", 00:22:23.191 "params": { 00:22:23.191 "small_pool_count": 8192, 00:22:23.191 "large_pool_count": 1024, 00:22:23.191 "small_bufsize": 8192, 00:22:23.191 "large_bufsize": 135168 00:22:23.191 } 00:22:23.191 } 00:22:23.191 ] 00:22:23.191 }, 00:22:23.191 { 00:22:23.191 "subsystem": "sock", 00:22:23.191 "config": [ 00:22:23.191 { 00:22:23.191 "method": "sock_impl_set_options", 00:22:23.191 "params": { 00:22:23.191 "impl_name": "posix", 00:22:23.191 
"recv_buf_size": 2097152, 00:22:23.191 "send_buf_size": 2097152, 00:22:23.191 "enable_recv_pipe": true, 00:22:23.191 "enable_quickack": false, 00:22:23.191 "enable_placement_id": 0, 00:22:23.191 "enable_zerocopy_send_server": true, 00:22:23.191 "enable_zerocopy_send_client": false, 00:22:23.191 "zerocopy_threshold": 0, 00:22:23.191 "tls_version": 0, 00:22:23.191 "enable_ktls": false 00:22:23.191 } 00:22:23.191 }, 00:22:23.191 { 00:22:23.191 "method": "sock_impl_set_options", 00:22:23.191 "params": { 00:22:23.191 "impl_name": "ssl", 00:22:23.191 "recv_buf_size": 4096, 00:22:23.191 "send_buf_size": 4096, 00:22:23.191 "enable_recv_pipe": true, 00:22:23.191 "enable_quickack": false, 00:22:23.191 "enable_placement_id": 0, 00:22:23.191 "enable_zerocopy_send_server": true, 00:22:23.191 "enable_zerocopy_send_client": false, 00:22:23.191 "zerocopy_threshold": 0, 00:22:23.191 "tls_version": 0, 00:22:23.191 "enable_ktls": false 00:22:23.191 } 00:22:23.191 } 00:22:23.191 ] 00:22:23.191 }, 00:22:23.191 { 00:22:23.191 "subsystem": "vmd", 00:22:23.191 "config": [] 00:22:23.191 }, 00:22:23.191 { 00:22:23.191 "subsystem": "accel", 00:22:23.191 "config": [ 00:22:23.191 { 00:22:23.191 "method": "accel_set_options", 00:22:23.191 "params": { 00:22:23.191 "small_cache_size": 128, 00:22:23.191 "large_cache_size": 16, 00:22:23.191 "task_count": 2048, 00:22:23.191 "sequence_count": 2048, 00:22:23.191 "buf_count": 2048 00:22:23.191 } 00:22:23.191 } 00:22:23.191 ] 00:22:23.191 }, 00:22:23.191 { 00:22:23.191 "subsystem": "bdev", 00:22:23.191 "config": [ 00:22:23.191 { 00:22:23.191 "method": "bdev_set_options", 00:22:23.191 "params": { 00:22:23.191 "bdev_io_pool_size": 65535, 00:22:23.191 "bdev_io_cache_size": 256, 00:22:23.191 "bdev_auto_examine": true, 00:22:23.191 "iobuf_small_cache_size": 128, 00:22:23.191 "iobuf_large_cache_size": 16 00:22:23.191 } 00:22:23.191 }, 00:22:23.191 { 00:22:23.191 "method": "bdev_raid_set_options", 00:22:23.191 "params": { 00:22:23.191 "process_window_size_kb": 1024 00:22:23.191 } 00:22:23.191 }, 00:22:23.191 { 00:22:23.191 "method": "bdev_iscsi_set_options", 00:22:23.191 "params": { 00:22:23.191 "timeout_sec": 30 00:22:23.191 } 00:22:23.191 }, 00:22:23.191 { 00:22:23.191 "method": "bdev_nvme_set_options", 00:22:23.191 "params": { 00:22:23.191 "action_on_timeout": "none", 00:22:23.191 "timeout_us": 0, 00:22:23.191 "timeout_admin_us": 0, 00:22:23.191 "keep_alive_timeout_ms": 10000, 00:22:23.191 "transport_retry_count": 4, 00:22:23.191 "arbitration_burst": 0, 00:22:23.191 "low_priority_weight": 0, 00:22:23.191 "medium_priority_weight": 0, 00:22:23.191 "high_priority_weight": 0, 00:22:23.191 "nvme_adminq_poll_period_us": 10000, 00:22:23.192 "nvme_ioq_poll_period_us": 0, 00:22:23.192 "io_queue_requests": 0, 00:22:23.192 "delay_cmd_submit": true, 00:22:23.192 "bdev_retry_count": 3, 00:22:23.192 "transport_ack_timeout": 0, 00:22:23.192 "ctrlr_loss_timeout_sec": 0, 00:22:23.192 "reconnect_delay_sec": 0, 00:22:23.192 "fast_io_fail_timeout_sec": 0, 00:22:23.192 "generate_uuids": false, 00:22:23.192 "transport_tos": 0, 00:22:23.192 "io_path_stat": false, 00:22:23.192 "allow_accel_sequence": false 00:22:23.192 } 00:22:23.192 }, 00:22:23.192 { 00:22:23.192 "method": "bdev_nvme_set_hotplug", 00:22:23.192 "params": { 00:22:23.192 "period_us": 100000, 00:22:23.192 "enable": false 00:22:23.192 } 00:22:23.192 }, 00:22:23.192 { 00:22:23.192 "method": "bdev_malloc_create", 00:22:23.192 "params": { 00:22:23.192 "name": "malloc0", 00:22:23.192 "num_blocks": 8192, 00:22:23.192 "block_size": 4096, 
00:22:23.192 "physical_block_size": 4096, 00:22:23.192 "uuid": "ee73fe3d-bd3e-41a1-a847-8fbe21875ea7", 00:22:23.192 "optimal_io_boundary": 0 00:22:23.192 } 00:22:23.192 }, 00:22:23.192 { 00:22:23.192 "method": "bdev_wait_for_examine" 00:22:23.192 } 00:22:23.192 ] 00:22:23.192 }, 00:22:23.192 { 00:22:23.192 "subsystem": "nbd", 00:22:23.192 "config": [] 00:22:23.192 }, 00:22:23.192 { 00:22:23.192 "subsystem": "scheduler", 00:22:23.192 "config": [ 00:22:23.192 { 00:22:23.192 "method": "framework_set_scheduler", 00:22:23.192 "params": { 00:22:23.192 "name": "static" 00:22:23.192 } 00:22:23.192 } 00:22:23.192 ] 00:22:23.192 }, 00:22:23.192 { 00:22:23.192 "subsystem": "nvmf", 00:22:23.192 "config": [ 00:22:23.192 { 00:22:23.192 "method": "nvmf_set_config", 00:22:23.192 "params": { 00:22:23.192 "discovery_filter": "match_any", 00:22:23.192 "admin_cmd_passthru": { 00:22:23.192 "identify_ctrlr": false 00:22:23.192 } 00:22:23.192 } 00:22:23.192 }, 00:22:23.192 { 00:22:23.192 "method": "nvmf_set_max_subsystems", 00:22:23.192 "params": { 00:22:23.192 "max_subsystems": 1024 00:22:23.192 } 00:22:23.192 }, 00:22:23.192 { 00:22:23.192 "method": "nvmf_set_crdt", 00:22:23.192 "params": { 00:22:23.192 "crdt1": 0, 00:22:23.192 "crdt2": 0, 00:22:23.192 "crdt3": 0 00:22:23.192 } 00:22:23.192 }, 00:22:23.192 { 00:22:23.192 "method": "nvmf_create_transport", 00:22:23.192 "params": { 00:22:23.192 "trtype": "TCP", 00:22:23.192 "max_queue_depth": 128, 00:22:23.192 "max_io_qpairs_per_ctrlr": 127, 00:22:23.192 "in_capsule_data_size": 4096, 00:22:23.192 "max_io_size": 131072, 00:22:23.192 "io_unit_size": 131072, 00:22:23.192 "max_aq_depth": 128, 00:22:23.192 "num_shared_buffers": 511, 00:22:23.192 "buf_cache_size": 4294967295, 00:22:23.192 "dif_insert_or_strip": false, 00:22:23.192 "zcopy": false, 00:22:23.192 "c2h_success": false, 00:22:23.192 "sock_priority": 0, 00:22:23.192 "abort_timeout_sec": 1 00:22:23.192 } 00:22:23.192 }, 00:22:23.192 { 00:22:23.192 "method": "nvmf_create_subsystem", 00:22:23.192 "params": { 00:22:23.192 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:23.192 "allow_any_host": false, 00:22:23.192 "serial_number": "SPDK00000000000001", 00:22:23.192 "model_number": "SPDK bdev Controller", 00:22:23.192 "max_namespaces": 10, 00:22:23.192 "min_cntlid": 1, 00:22:23.192 "max_cntlid": 65519, 00:22:23.192 "ana_reporting": false 00:22:23.192 } 00:22:23.192 }, 00:22:23.192 { 00:22:23.192 "method": "nvmf_subsystem_add_host", 00:22:23.192 "params": { 00:22:23.192 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:23.192 "host": "nqn.2016-06.io.spdk:host1", 00:22:23.192 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:22:23.192 } 00:22:23.192 }, 00:22:23.192 { 00:22:23.192 "method": "nvmf_subsystem_add_ns", 00:22:23.192 "params": { 00:22:23.192 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:23.192 "namespace": { 00:22:23.192 "nsid": 1, 00:22:23.192 "bdev_name": "malloc0", 00:22:23.192 "nguid": "EE73FE3DBD3E41A1A8478FBE21875EA7", 00:22:23.192 "uuid": "ee73fe3d-bd3e-41a1-a847-8fbe21875ea7" 00:22:23.192 } 00:22:23.192 } 00:22:23.192 }, 00:22:23.192 { 00:22:23.192 "method": "nvmf_subsystem_add_listener", 00:22:23.192 "params": { 00:22:23.192 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:23.192 "listen_address": { 00:22:23.192 "trtype": "TCP", 00:22:23.192 "adrfam": "IPv4", 00:22:23.192 "traddr": "10.0.0.2", 00:22:23.192 "trsvcid": "4420" 00:22:23.192 }, 00:22:23.192 "secure_channel": true 00:22:23.192 } 00:22:23.192 } 00:22:23.192 ] 00:22:23.192 } 00:22:23.192 ] 00:22:23.192 }' 00:22:23.192 
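The JSON blob echoed above is handed to the restarted target over /dev/fd/62, so the whole configuration, including the TLS-specific pieces (the psk path passed to nvmf_subsystem_add_host and secure_channel on the listener), is applied at startup rather than through individual rpc.py calls afterwards. A minimal sketch of that pattern, using the binary path from this workspace and only one of the methods shown above:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# a pared-down config; the test passes the full subsystem list echoed above
config='{
  "subsystems": [
    { "subsystem": "nvmf",
      "config": [
        { "method": "nvmf_create_transport",
          "params": { "trtype": "TCP", "max_queue_depth": 128 } }
      ] }
  ]
}'
# process substitution is what produces the /dev/fd/62 seen in the command line
"$SPDK/build/bin/nvmf_tgt" -m 0x2 -c <(echo "$config")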
23:18:45 -- nvmf/common.sh@469 -- # nvmfpid=2881400 00:22:23.192 23:18:45 -- nvmf/common.sh@470 -- # waitforlisten 2881400 00:22:23.192 23:18:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:23.192 23:18:45 -- common/autotest_common.sh@819 -- # '[' -z 2881400 ']' 00:22:23.192 23:18:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.192 23:18:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:23.192 23:18:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:23.192 23:18:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:23.192 23:18:45 -- common/autotest_common.sh@10 -- # set +x 00:22:23.192 [2024-06-07 23:18:45.871600] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:22:23.192 [2024-06-07 23:18:45.871655] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:23.452 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.452 [2024-06-07 23:18:45.956151] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.452 [2024-06-07 23:18:45.983940] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:23.452 [2024-06-07 23:18:45.984041] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:23.452 [2024-06-07 23:18:45.984051] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:23.452 [2024-06-07 23:18:45.984056] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
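The NOTICE lines above spell out how to pull the tracepoint data enabled by -e 0xFFFF while the target is running; for reference, both options in one place (a sketch, assuming the spdk_trace tool from this build is on PATH):

# snapshot the nvmf tracepoints of instance id 0, as the NOTICE suggests
spdk_trace -s nvmf -i 0
# or keep the raw shared-memory trace file for offline analysis
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0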
00:22:23.452 [2024-06-07 23:18:45.984074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.712 [2024-06-07 23:18:46.154314] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.712 [2024-06-07 23:18:46.186343] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:23.712 [2024-06-07 23:18:46.186518] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:23.971 23:18:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:23.971 23:18:46 -- common/autotest_common.sh@852 -- # return 0 00:22:23.971 23:18:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:23.971 23:18:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:23.971 23:18:46 -- common/autotest_common.sh@10 -- # set +x 00:22:23.971 23:18:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.971 23:18:46 -- target/tls.sh@216 -- # bdevperf_pid=2881745 00:22:24.231 23:18:46 -- target/tls.sh@217 -- # waitforlisten 2881745 /var/tmp/bdevperf.sock 00:22:24.231 23:18:46 -- common/autotest_common.sh@819 -- # '[' -z 2881745 ']' 00:22:24.231 23:18:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:24.231 23:18:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:24.231 23:18:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:24.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:24.231 23:18:46 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:24.231 23:18:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:24.231 23:18:46 -- common/autotest_common.sh@10 -- # set +x 00:22:24.231 23:18:46 -- target/tls.sh@213 -- # echo '{ 00:22:24.231 "subsystems": [ 00:22:24.231 { 00:22:24.231 "subsystem": "iobuf", 00:22:24.231 "config": [ 00:22:24.231 { 00:22:24.231 "method": "iobuf_set_options", 00:22:24.231 "params": { 00:22:24.231 "small_pool_count": 8192, 00:22:24.231 "large_pool_count": 1024, 00:22:24.231 "small_bufsize": 8192, 00:22:24.231 "large_bufsize": 135168 00:22:24.231 } 00:22:24.231 } 00:22:24.231 ] 00:22:24.231 }, 00:22:24.231 { 00:22:24.231 "subsystem": "sock", 00:22:24.231 "config": [ 00:22:24.231 { 00:22:24.231 "method": "sock_impl_set_options", 00:22:24.231 "params": { 00:22:24.231 "impl_name": "posix", 00:22:24.231 "recv_buf_size": 2097152, 00:22:24.231 "send_buf_size": 2097152, 00:22:24.231 "enable_recv_pipe": true, 00:22:24.231 "enable_quickack": false, 00:22:24.231 "enable_placement_id": 0, 00:22:24.231 "enable_zerocopy_send_server": true, 00:22:24.231 "enable_zerocopy_send_client": false, 00:22:24.231 "zerocopy_threshold": 0, 00:22:24.231 "tls_version": 0, 00:22:24.231 "enable_ktls": false 00:22:24.231 } 00:22:24.231 }, 00:22:24.231 { 00:22:24.231 "method": "sock_impl_set_options", 00:22:24.231 "params": { 00:22:24.231 "impl_name": "ssl", 00:22:24.231 "recv_buf_size": 4096, 00:22:24.231 "send_buf_size": 4096, 00:22:24.231 "enable_recv_pipe": true, 00:22:24.231 "enable_quickack": false, 00:22:24.231 "enable_placement_id": 0, 00:22:24.231 "enable_zerocopy_send_server": true, 00:22:24.231 "enable_zerocopy_send_client": false, 00:22:24.231 "zerocopy_threshold": 0, 00:22:24.231 "tls_version": 0, 
00:22:24.231 "enable_ktls": false 00:22:24.231 } 00:22:24.231 } 00:22:24.231 ] 00:22:24.231 }, 00:22:24.231 { 00:22:24.231 "subsystem": "vmd", 00:22:24.231 "config": [] 00:22:24.231 }, 00:22:24.231 { 00:22:24.231 "subsystem": "accel", 00:22:24.231 "config": [ 00:22:24.231 { 00:22:24.231 "method": "accel_set_options", 00:22:24.231 "params": { 00:22:24.231 "small_cache_size": 128, 00:22:24.231 "large_cache_size": 16, 00:22:24.231 "task_count": 2048, 00:22:24.231 "sequence_count": 2048, 00:22:24.231 "buf_count": 2048 00:22:24.231 } 00:22:24.231 } 00:22:24.231 ] 00:22:24.231 }, 00:22:24.231 { 00:22:24.231 "subsystem": "bdev", 00:22:24.231 "config": [ 00:22:24.231 { 00:22:24.231 "method": "bdev_set_options", 00:22:24.231 "params": { 00:22:24.231 "bdev_io_pool_size": 65535, 00:22:24.231 "bdev_io_cache_size": 256, 00:22:24.231 "bdev_auto_examine": true, 00:22:24.231 "iobuf_small_cache_size": 128, 00:22:24.231 "iobuf_large_cache_size": 16 00:22:24.231 } 00:22:24.231 }, 00:22:24.231 { 00:22:24.231 "method": "bdev_raid_set_options", 00:22:24.231 "params": { 00:22:24.231 "process_window_size_kb": 1024 00:22:24.231 } 00:22:24.231 }, 00:22:24.231 { 00:22:24.231 "method": "bdev_iscsi_set_options", 00:22:24.231 "params": { 00:22:24.231 "timeout_sec": 30 00:22:24.231 } 00:22:24.231 }, 00:22:24.231 { 00:22:24.231 "method": "bdev_nvme_set_options", 00:22:24.231 "params": { 00:22:24.231 "action_on_timeout": "none", 00:22:24.231 "timeout_us": 0, 00:22:24.231 "timeout_admin_us": 0, 00:22:24.231 "keep_alive_timeout_ms": 10000, 00:22:24.231 "transport_retry_count": 4, 00:22:24.231 "arbitration_burst": 0, 00:22:24.231 "low_priority_weight": 0, 00:22:24.231 "medium_priority_weight": 0, 00:22:24.231 "high_priority_weight": 0, 00:22:24.231 "nvme_adminq_poll_period_us": 10000, 00:22:24.231 "nvme_ioq_poll_period_us": 0, 00:22:24.231 "io_queue_requests": 512, 00:22:24.231 "delay_cmd_submit": true, 00:22:24.231 "bdev_retry_count": 3, 00:22:24.231 "transport_ack_timeout": 0, 00:22:24.231 "ctrlr_loss_timeout_sec": 0, 00:22:24.231 "reconnect_delay_sec": 0, 00:22:24.231 "fast_io_fail_timeout_sec": 0, 00:22:24.231 "generate_uuids": false, 00:22:24.231 "transport_tos": 0, 00:22:24.231 "io_path_stat": false, 00:22:24.231 "allow_accel_sequence": false 00:22:24.231 } 00:22:24.231 }, 00:22:24.231 { 00:22:24.231 "method": "bdev_nvme_attach_controller", 00:22:24.231 "params": { 00:22:24.231 "name": "TLSTEST", 00:22:24.231 "trtype": "TCP", 00:22:24.231 "adrfam": "IPv4", 00:22:24.231 "traddr": "10.0.0.2", 00:22:24.231 "trsvcid": "4420", 00:22:24.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.231 "prchk_reftag": false, 00:22:24.231 "prchk_guard": false, 00:22:24.231 "ctrlr_loss_timeout_sec": 0, 00:22:24.231 "reconnect_delay_sec": 0, 00:22:24.231 "fast_io_fail_timeout_sec": 0, 00:22:24.231 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:22:24.231 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:24.231 "hdgst": false, 00:22:24.231 "ddgst": false 00:22:24.231 } 00:22:24.231 }, 00:22:24.231 { 00:22:24.231 "method": "bdev_nvme_set_hotplug", 00:22:24.231 "params": { 00:22:24.231 "period_us": 100000, 00:22:24.231 "enable": false 00:22:24.231 } 00:22:24.231 }, 00:22:24.231 { 00:22:24.231 "method": "bdev_wait_for_examine" 00:22:24.231 } 00:22:24.231 ] 00:22:24.231 }, 00:22:24.231 { 00:22:24.231 "subsystem": "nbd", 00:22:24.232 "config": [] 00:22:24.232 } 00:22:24.232 ] 00:22:24.232 }' 00:22:24.232 [2024-06-07 23:18:46.708515] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 
initialization... 00:22:24.232 [2024-06-07 23:18:46.708579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2881745 ] 00:22:24.232 EAL: No free 2048 kB hugepages reported on node 1 00:22:24.232 [2024-06-07 23:18:46.760266] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.232 [2024-06-07 23:18:46.786811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:24.232 [2024-06-07 23:18:46.897394] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:24.800 23:18:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:24.800 23:18:47 -- common/autotest_common.sh@852 -- # return 0 00:22:24.800 23:18:47 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:25.060 Running I/O for 10 seconds... 00:22:35.044 00:22:35.044 Latency(us) 00:22:35.044 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.044 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:35.044 Verification LBA range: start 0x0 length 0x2000 00:22:35.044 TLSTESTn1 : 10.02 3524.47 13.77 0.00 0.00 36279.38 3495.25 55924.05 00:22:35.044 =================================================================================================================== 00:22:35.044 Total : 3524.47 13.77 0.00 0.00 36279.38 3495.25 55924.05 00:22:35.044 0 00:22:35.044 23:18:57 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:35.044 23:18:57 -- target/tls.sh@223 -- # killprocess 2881745 00:22:35.044 23:18:57 -- common/autotest_common.sh@926 -- # '[' -z 2881745 ']' 00:22:35.044 23:18:57 -- common/autotest_common.sh@930 -- # kill -0 2881745 00:22:35.044 23:18:57 -- common/autotest_common.sh@931 -- # uname 00:22:35.044 23:18:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:35.044 23:18:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2881745 00:22:35.044 23:18:57 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:35.044 23:18:57 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:35.044 23:18:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2881745' 00:22:35.044 killing process with pid 2881745 00:22:35.044 23:18:57 -- common/autotest_common.sh@945 -- # kill 2881745 00:22:35.044 Received shutdown signal, test time was about 10.000000 seconds 00:22:35.044 00:22:35.044 Latency(us) 00:22:35.044 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.044 =================================================================================================================== 00:22:35.044 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:35.044 23:18:57 -- common/autotest_common.sh@950 -- # wait 2881745 00:22:35.303 23:18:57 -- target/tls.sh@224 -- # killprocess 2881400 00:22:35.303 23:18:57 -- common/autotest_common.sh@926 -- # '[' -z 2881400 ']' 00:22:35.303 23:18:57 -- common/autotest_common.sh@930 -- # kill -0 2881400 00:22:35.303 23:18:57 -- common/autotest_common.sh@931 -- # uname 00:22:35.303 23:18:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:35.303 23:18:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2881400 00:22:35.303 23:18:57 -- 
common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:35.303 23:18:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:35.303 23:18:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2881400' 00:22:35.303 killing process with pid 2881400 00:22:35.303 23:18:57 -- common/autotest_common.sh@945 -- # kill 2881400 00:22:35.303 23:18:57 -- common/autotest_common.sh@950 -- # wait 2881400 00:22:35.303 23:18:57 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:22:35.303 23:18:57 -- target/tls.sh@227 -- # cleanup 00:22:35.303 23:18:57 -- target/tls.sh@15 -- # process_shm --id 0 00:22:35.303 23:18:57 -- common/autotest_common.sh@796 -- # type=--id 00:22:35.303 23:18:57 -- common/autotest_common.sh@797 -- # id=0 00:22:35.303 23:18:57 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:22:35.303 23:18:57 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:35.303 23:18:57 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:22:35.303 23:18:57 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:22:35.303 23:18:57 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:22:35.303 23:18:57 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:35.303 nvmf_trace.0 00:22:35.563 23:18:58 -- common/autotest_common.sh@811 -- # return 0 00:22:35.563 23:18:58 -- target/tls.sh@16 -- # killprocess 2881745 00:22:35.563 23:18:58 -- common/autotest_common.sh@926 -- # '[' -z 2881745 ']' 00:22:35.563 23:18:58 -- common/autotest_common.sh@930 -- # kill -0 2881745 00:22:35.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2881745) - No such process 00:22:35.563 23:18:58 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2881745 is not found' 00:22:35.563 Process with pid 2881745 is not found 00:22:35.563 23:18:58 -- target/tls.sh@17 -- # nvmftestfini 00:22:35.563 23:18:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:35.563 23:18:58 -- nvmf/common.sh@116 -- # sync 00:22:35.563 23:18:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:35.563 23:18:58 -- nvmf/common.sh@119 -- # set +e 00:22:35.563 23:18:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:35.563 23:18:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:35.563 rmmod nvme_tcp 00:22:35.563 rmmod nvme_fabrics 00:22:35.563 rmmod nvme_keyring 00:22:35.563 23:18:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:35.563 23:18:58 -- nvmf/common.sh@123 -- # set -e 00:22:35.563 23:18:58 -- nvmf/common.sh@124 -- # return 0 00:22:35.563 23:18:58 -- nvmf/common.sh@477 -- # '[' -n 2881400 ']' 00:22:35.563 23:18:58 -- nvmf/common.sh@478 -- # killprocess 2881400 00:22:35.563 23:18:58 -- common/autotest_common.sh@926 -- # '[' -z 2881400 ']' 00:22:35.563 23:18:58 -- common/autotest_common.sh@930 -- # kill -0 2881400 00:22:35.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2881400) - No such process 00:22:35.563 23:18:58 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2881400 is not found' 00:22:35.563 Process with pid 2881400 is not found 00:22:35.563 23:18:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:35.563 23:18:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:35.563 23:18:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:35.563 23:18:58 -- nvmf/common.sh@273 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:35.563 23:18:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:35.563 23:18:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.563 23:18:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:35.563 23:18:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.475 23:19:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:37.475 23:19:00 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:37.475 00:22:37.475 real 1m11.479s 00:22:37.475 user 1m43.235s 00:22:37.475 sys 0m27.214s 00:22:37.475 23:19:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:37.475 23:19:00 -- common/autotest_common.sh@10 -- # set +x 00:22:37.475 ************************************ 00:22:37.475 END TEST nvmf_tls 00:22:37.475 ************************************ 00:22:37.736 23:19:00 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:37.736 23:19:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:37.736 23:19:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:37.736 23:19:00 -- common/autotest_common.sh@10 -- # set +x 00:22:37.736 ************************************ 00:22:37.736 START TEST nvmf_fips 00:22:37.736 ************************************ 00:22:37.736 23:19:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:37.736 * Looking for test storage... 
00:22:37.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:37.736 23:19:00 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:37.736 23:19:00 -- nvmf/common.sh@7 -- # uname -s 00:22:37.736 23:19:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.736 23:19:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.736 23:19:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.736 23:19:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.736 23:19:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.736 23:19:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.736 23:19:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.736 23:19:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.736 23:19:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.736 23:19:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.736 23:19:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:37.736 23:19:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:37.736 23:19:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.736 23:19:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.736 23:19:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:37.736 23:19:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:37.736 23:19:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.737 23:19:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.737 23:19:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.737 23:19:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.737 23:19:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.737 23:19:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.737 23:19:00 -- paths/export.sh@5 -- # export PATH 00:22:37.737 23:19:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.737 23:19:00 -- nvmf/common.sh@46 -- # : 0 00:22:37.737 23:19:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:37.737 23:19:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:37.737 23:19:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:37.737 23:19:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.737 23:19:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.737 23:19:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:37.737 23:19:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:37.737 23:19:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:37.737 23:19:00 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:37.737 23:19:00 -- fips/fips.sh@89 -- # check_openssl_version 00:22:37.737 23:19:00 -- fips/fips.sh@83 -- # local target=3.0.0 00:22:37.737 23:19:00 -- fips/fips.sh@85 -- # openssl version 00:22:37.737 23:19:00 -- fips/fips.sh@85 -- # awk '{print $2}' 00:22:37.737 23:19:00 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:22:37.737 23:19:00 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:22:37.737 23:19:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:37.737 23:19:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:37.737 23:19:00 -- scripts/common.sh@335 -- # IFS=.-: 00:22:37.737 23:19:00 -- scripts/common.sh@335 -- # read -ra ver1 00:22:37.737 23:19:00 -- scripts/common.sh@336 -- # IFS=.-: 00:22:37.737 23:19:00 -- scripts/common.sh@336 -- # read -ra ver2 00:22:37.737 23:19:00 -- scripts/common.sh@337 -- # local 'op=>=' 00:22:37.737 23:19:00 -- scripts/common.sh@339 -- # ver1_l=3 00:22:37.737 23:19:00 -- scripts/common.sh@340 -- # ver2_l=3 00:22:37.737 23:19:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:37.737 23:19:00 -- scripts/common.sh@343 -- # case "$op" in 00:22:37.737 23:19:00 -- scripts/common.sh@347 -- # : 1 00:22:37.737 23:19:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:37.737 23:19:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:37.737 23:19:00 -- scripts/common.sh@364 -- # decimal 3 00:22:37.737 23:19:00 -- scripts/common.sh@352 -- # local d=3 00:22:37.737 23:19:00 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:37.737 23:19:00 -- scripts/common.sh@354 -- # echo 3 00:22:37.737 23:19:00 -- scripts/common.sh@364 -- # ver1[v]=3 00:22:37.737 23:19:00 -- scripts/common.sh@365 -- # decimal 3 00:22:37.737 23:19:00 -- scripts/common.sh@352 -- # local d=3 00:22:37.737 23:19:00 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:37.737 23:19:00 -- scripts/common.sh@354 -- # echo 3 00:22:37.737 23:19:00 -- scripts/common.sh@365 -- # ver2[v]=3 00:22:37.737 23:19:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:37.737 23:19:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:37.737 23:19:00 -- scripts/common.sh@363 -- # (( v++ )) 00:22:37.737 23:19:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:37.737 23:19:00 -- scripts/common.sh@364 -- # decimal 0 00:22:37.737 23:19:00 -- scripts/common.sh@352 -- # local d=0 00:22:37.737 23:19:00 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:37.737 23:19:00 -- scripts/common.sh@354 -- # echo 0 00:22:37.737 23:19:00 -- scripts/common.sh@364 -- # ver1[v]=0 00:22:37.737 23:19:00 -- scripts/common.sh@365 -- # decimal 0 00:22:37.737 23:19:00 -- scripts/common.sh@352 -- # local d=0 00:22:37.737 23:19:00 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:37.737 23:19:00 -- scripts/common.sh@354 -- # echo 0 00:22:37.737 23:19:00 -- scripts/common.sh@365 -- # ver2[v]=0 00:22:37.737 23:19:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:37.737 23:19:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:37.737 23:19:00 -- scripts/common.sh@363 -- # (( v++ )) 00:22:37.737 23:19:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:37.737 23:19:00 -- scripts/common.sh@364 -- # decimal 9 00:22:37.737 23:19:00 -- scripts/common.sh@352 -- # local d=9 00:22:37.737 23:19:00 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:22:37.737 23:19:00 -- scripts/common.sh@354 -- # echo 9 00:22:37.737 23:19:00 -- scripts/common.sh@364 -- # ver1[v]=9 00:22:37.737 23:19:00 -- scripts/common.sh@365 -- # decimal 0 00:22:37.737 23:19:00 -- scripts/common.sh@352 -- # local d=0 00:22:37.737 23:19:00 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:37.737 23:19:00 -- scripts/common.sh@354 -- # echo 0 00:22:37.737 23:19:00 -- scripts/common.sh@365 -- # ver2[v]=0 00:22:37.737 23:19:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:37.737 23:19:00 -- scripts/common.sh@366 -- # return 0 00:22:37.737 23:19:00 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:22:37.737 23:19:00 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:22:37.737 23:19:00 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:22:37.737 23:19:00 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:37.737 23:19:00 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:37.737 23:19:00 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:22:37.737 23:19:00 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:22:37.737 23:19:00 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:22:37.737 23:19:00 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:22:37.737 23:19:00 -- fips/fips.sh@114 -- # build_openssl_config 00:22:37.737 23:19:00 -- fips/fips.sh@37 -- # cat 00:22:37.737 23:19:00 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:22:37.737 23:19:00 -- fips/fips.sh@58 -- # cat - 00:22:37.737 23:19:00 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:37.737 23:19:00 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:22:37.737 23:19:00 -- fips/fips.sh@117 -- # mapfile -t providers 00:22:37.737 23:19:00 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:22:37.737 23:19:00 -- fips/fips.sh@117 -- # openssl list -providers 00:22:37.737 23:19:00 -- fips/fips.sh@117 -- # grep name 00:22:37.998 23:19:00 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:22:37.998 23:19:00 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:22:37.998 23:19:00 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:37.998 23:19:00 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:22:37.998 23:19:00 -- common/autotest_common.sh@640 -- # local es=0 00:22:37.998 23:19:00 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:37.998 23:19:00 -- common/autotest_common.sh@628 -- # local arg=openssl 00:22:37.998 23:19:00 -- fips/fips.sh@128 -- # : 00:22:37.998 23:19:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:37.998 23:19:00 -- common/autotest_common.sh@632 -- # type -t openssl 00:22:37.998 23:19:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:37.998 23:19:00 -- common/autotest_common.sh@634 -- # type -P openssl 00:22:37.998 23:19:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:37.998 23:19:00 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:22:37.998 23:19:00 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:22:37.998 23:19:00 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:22:37.998 Error setting digest 00:22:37.998 00E252489D7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:22:37.998 00E252489D7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:22:37.998 23:19:00 -- common/autotest_common.sh@643 -- # es=1 00:22:37.998 23:19:00 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:37.998 23:19:00 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:37.998 23:19:00 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
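The fips.sh preamble traced above does three things: it checks that the OpenSSL version is at least 3.0.0, points OPENSSL_CONF at the generated spdk_fips.conf and verifies that both the base and FIPS providers are listed, and finally insists that a non-approved digest is rejected (the "Error setting digest" output is the expected result). A rough stand-alone equivalent of that last gate, assuming the spdk_fips.conf written by the test is in the current directory:

export OPENSSL_CONF=spdk_fips.conf
openssl list -providers | grep name          # expect the base and fips providers
if echo -n test | openssl md5 >/dev/null 2>&1; then
    echo "MD5 accepted, so FIPS mode is not being enforced" >&2
    exit 1
fi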
00:22:37.998 23:19:00 -- fips/fips.sh@131 -- # nvmftestinit 00:22:37.998 23:19:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:37.998 23:19:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.998 23:19:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:37.998 23:19:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:37.998 23:19:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:37.998 23:19:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.998 23:19:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:37.998 23:19:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.998 23:19:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:37.998 23:19:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:37.998 23:19:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:37.998 23:19:00 -- common/autotest_common.sh@10 -- # set +x 00:22:44.583 23:19:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:44.583 23:19:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:44.583 23:19:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:44.583 23:19:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:44.583 23:19:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:44.583 23:19:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:44.583 23:19:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:44.584 23:19:07 -- nvmf/common.sh@294 -- # net_devs=() 00:22:44.584 23:19:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:44.584 23:19:07 -- nvmf/common.sh@295 -- # e810=() 00:22:44.584 23:19:07 -- nvmf/common.sh@295 -- # local -ga e810 00:22:44.584 23:19:07 -- nvmf/common.sh@296 -- # x722=() 00:22:44.584 23:19:07 -- nvmf/common.sh@296 -- # local -ga x722 00:22:44.584 23:19:07 -- nvmf/common.sh@297 -- # mlx=() 00:22:44.584 23:19:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:44.584 23:19:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:44.584 23:19:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:44.584 23:19:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:44.584 23:19:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:44.584 23:19:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:44.584 23:19:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:44.584 23:19:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:44.584 23:19:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:44.584 23:19:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:44.584 23:19:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:44.584 23:19:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:44.584 23:19:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:44.584 23:19:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:44.584 23:19:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:44.584 23:19:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:44.584 23:19:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:44.584 23:19:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:44.584 23:19:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:44.584 23:19:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:44.584 Found 0000:31:00.0 
(0x8086 - 0x159b) 00:22:44.584 23:19:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:44.584 23:19:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:44.584 23:19:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.584 23:19:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.584 23:19:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:44.584 23:19:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:44.584 23:19:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:44.584 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:44.584 23:19:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:44.584 23:19:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:44.584 23:19:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.584 23:19:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.584 23:19:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:44.584 23:19:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:44.584 23:19:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:44.584 23:19:07 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:44.584 23:19:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:44.584 23:19:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.584 23:19:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:44.584 23:19:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.584 23:19:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:44.584 Found net devices under 0000:31:00.0: cvl_0_0 00:22:44.584 23:19:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.584 23:19:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:44.584 23:19:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.584 23:19:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:44.584 23:19:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.584 23:19:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:44.584 Found net devices under 0000:31:00.1: cvl_0_1 00:22:44.584 23:19:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.584 23:19:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:44.584 23:19:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:44.584 23:19:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:44.584 23:19:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:44.584 23:19:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:44.584 23:19:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:44.584 23:19:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:44.584 23:19:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:44.584 23:19:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:44.584 23:19:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:44.584 23:19:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:44.584 23:19:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:44.584 23:19:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:44.584 23:19:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:44.584 23:19:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:44.584 23:19:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:44.584 23:19:07 -- nvmf/common.sh@247 -- # ip netns 
add cvl_0_0_ns_spdk 00:22:44.584 23:19:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:44.845 23:19:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:44.845 23:19:07 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:44.845 23:19:07 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:44.845 23:19:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:44.845 23:19:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:44.845 23:19:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:44.845 23:19:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:44.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:44.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:22:44.845 00:22:44.845 --- 10.0.0.2 ping statistics --- 00:22:44.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.845 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:22:44.845 23:19:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:44.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:44.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.347 ms 00:22:44.845 00:22:44.845 --- 10.0.0.1 ping statistics --- 00:22:44.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.845 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:22:44.845 23:19:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:44.845 23:19:07 -- nvmf/common.sh@410 -- # return 0 00:22:44.845 23:19:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:44.845 23:19:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:44.845 23:19:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:44.845 23:19:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:44.845 23:19:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:44.845 23:19:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:44.845 23:19:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:44.845 23:19:07 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:22:44.845 23:19:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:44.845 23:19:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:44.845 23:19:07 -- common/autotest_common.sh@10 -- # set +x 00:22:44.845 23:19:07 -- nvmf/common.sh@469 -- # nvmfpid=2888119 00:22:44.845 23:19:07 -- nvmf/common.sh@470 -- # waitforlisten 2888119 00:22:44.845 23:19:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:44.845 23:19:07 -- common/autotest_common.sh@819 -- # '[' -z 2888119 ']' 00:22:44.845 23:19:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.845 23:19:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:44.845 23:19:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.845 23:19:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:44.845 23:19:07 -- common/autotest_common.sh@10 -- # set +x 00:22:45.105 [2024-06-07 23:19:07.545899] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
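The nvmf_tcp_init sequence above is what makes the 10.0.0.1 and 10.0.0.2 pings work: one port of the E810 pair (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace for the target, while the other (cvl_0_1) stays in the root namespace for the initiator. The essential steps, condensed from the commands logged above (run as root):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT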
00:22:45.105 [2024-06-07 23:19:07.545967] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:45.105 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.105 [2024-06-07 23:19:07.635602] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.105 [2024-06-07 23:19:07.680020] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:45.105 [2024-06-07 23:19:07.680168] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:45.105 [2024-06-07 23:19:07.680178] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:45.105 [2024-06-07 23:19:07.680184] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:45.105 [2024-06-07 23:19:07.680213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.674 23:19:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:45.675 23:19:08 -- common/autotest_common.sh@852 -- # return 0 00:22:45.675 23:19:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:45.675 23:19:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:45.675 23:19:08 -- common/autotest_common.sh@10 -- # set +x 00:22:45.675 23:19:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.675 23:19:08 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:22:45.675 23:19:08 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:45.675 23:19:08 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:45.675 23:19:08 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:45.675 23:19:08 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:45.675 23:19:08 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:45.675 23:19:08 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:45.675 23:19:08 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:45.936 [2024-06-07 23:19:08.496676] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.936 [2024-06-07 23:19:08.512665] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:45.936 [2024-06-07 23:19:08.512923] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:45.936 malloc0 00:22:45.936 23:19:08 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:45.936 23:19:08 -- fips/fips.sh@148 -- # bdevperf_pid=2888259 00:22:45.936 23:19:08 -- fips/fips.sh@149 -- # waitforlisten 2888259 /var/tmp/bdevperf.sock 00:22:45.936 23:19:08 -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:45.936 23:19:08 -- common/autotest_common.sh@819 -- # '[' -z 2888259 ']' 00:22:45.936 23:19:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:45.936 23:19:08 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:22:45.936 23:19:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:45.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:45.936 23:19:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:45.936 23:19:08 -- common/autotest_common.sh@10 -- # set +x 00:22:46.196 [2024-06-07 23:19:08.642857] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:22:46.196 [2024-06-07 23:19:08.642931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2888259 ] 00:22:46.196 EAL: No free 2048 kB hugepages reported on node 1 00:22:46.196 [2024-06-07 23:19:08.698618] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.196 [2024-06-07 23:19:08.733867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:46.765 23:19:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:46.765 23:19:09 -- common/autotest_common.sh@852 -- # return 0 00:22:46.765 23:19:09 -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:47.025 [2024-06-07 23:19:09.501799] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:47.025 TLSTESTn1 00:22:47.025 23:19:09 -- fips/fips.sh@155 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:47.025 Running I/O for 10 seconds... 
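The initiator half of this FIPS run is the three commands just traced: start bdevperf in wait mode, attach a TLS-protected controller through its RPC socket using the PSK written earlier, then kick off the verify workload. Condensed, with the same paths and arguments as above (the test also waits for the RPC socket before calling rpc.py):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 &
"$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk "$SPDK/test/nvmf/fips/key.txt"
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests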
00:22:59.254 00:22:59.254 Latency(us) 00:22:59.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.254 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:59.254 Verification LBA range: start 0x0 length 0x2000 00:22:59.254 TLSTESTn1 : 10.02 3668.80 14.33 0.00 0.00 34858.60 3495.25 53302.61 00:22:59.254 =================================================================================================================== 00:22:59.254 Total : 3668.80 14.33 0.00 0.00 34858.60 3495.25 53302.61 00:22:59.254 0 00:22:59.254 23:19:19 -- fips/fips.sh@1 -- # cleanup 00:22:59.254 23:19:19 -- fips/fips.sh@15 -- # process_shm --id 0 00:22:59.254 23:19:19 -- common/autotest_common.sh@796 -- # type=--id 00:22:59.254 23:19:19 -- common/autotest_common.sh@797 -- # id=0 00:22:59.254 23:19:19 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:22:59.254 23:19:19 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:59.254 23:19:19 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:22:59.254 23:19:19 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:22:59.254 23:19:19 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:22:59.254 23:19:19 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:59.254 nvmf_trace.0 00:22:59.254 23:19:19 -- common/autotest_common.sh@811 -- # return 0 00:22:59.254 23:19:19 -- fips/fips.sh@16 -- # killprocess 2888259 00:22:59.254 23:19:19 -- common/autotest_common.sh@926 -- # '[' -z 2888259 ']' 00:22:59.254 23:19:19 -- common/autotest_common.sh@930 -- # kill -0 2888259 00:22:59.254 23:19:19 -- common/autotest_common.sh@931 -- # uname 00:22:59.254 23:19:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:59.254 23:19:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2888259 00:22:59.254 23:19:19 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:59.254 23:19:19 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:59.254 23:19:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2888259' 00:22:59.255 killing process with pid 2888259 00:22:59.255 23:19:19 -- common/autotest_common.sh@945 -- # kill 2888259 00:22:59.255 Received shutdown signal, test time was about 10.000000 seconds 00:22:59.255 00:22:59.255 Latency(us) 00:22:59.255 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.255 =================================================================================================================== 00:22:59.255 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:59.255 23:19:19 -- common/autotest_common.sh@950 -- # wait 2888259 00:22:59.255 23:19:19 -- fips/fips.sh@17 -- # nvmftestfini 00:22:59.255 23:19:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:59.255 23:19:19 -- nvmf/common.sh@116 -- # sync 00:22:59.255 23:19:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:59.255 23:19:19 -- nvmf/common.sh@119 -- # set +e 00:22:59.255 23:19:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:59.255 23:19:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:59.255 rmmod nvme_tcp 00:22:59.255 rmmod nvme_fabrics 00:22:59.255 rmmod nvme_keyring 00:22:59.255 23:19:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:59.255 23:19:20 -- nvmf/common.sh@123 -- # set -e 00:22:59.255 23:19:20 -- nvmf/common.sh@124 -- # return 0 
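The process_shm step above archives the trace that accumulated during the run next to the CI output as nvmf_trace.0_shm.tar.gz. One way to inspect it afterwards (a sketch; that spdk_trace can be pointed at the restored file this way is an assumption based on the startup NOTICEs):

# restore the shared-memory trace file and read it back with the same instance id
tar -C /dev/shm -xzf nvmf_trace.0_shm.tar.gz
spdk_trace -s nvmf -i 0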
00:22:59.255 23:19:20 -- nvmf/common.sh@477 -- # '[' -n 2888119 ']' 00:22:59.255 23:19:20 -- nvmf/common.sh@478 -- # killprocess 2888119 00:22:59.255 23:19:20 -- common/autotest_common.sh@926 -- # '[' -z 2888119 ']' 00:22:59.255 23:19:20 -- common/autotest_common.sh@930 -- # kill -0 2888119 00:22:59.255 23:19:20 -- common/autotest_common.sh@931 -- # uname 00:22:59.255 23:19:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:59.255 23:19:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2888119 00:22:59.255 23:19:20 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:59.255 23:19:20 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:59.255 23:19:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2888119' 00:22:59.255 killing process with pid 2888119 00:22:59.255 23:19:20 -- common/autotest_common.sh@945 -- # kill 2888119 00:22:59.255 23:19:20 -- common/autotest_common.sh@950 -- # wait 2888119 00:22:59.255 23:19:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:59.255 23:19:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:59.255 23:19:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:59.255 23:19:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:59.255 23:19:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:59.255 23:19:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.255 23:19:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:59.255 23:19:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.826 23:19:22 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:59.826 23:19:22 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:59.826 00:22:59.826 real 0m22.086s 00:22:59.826 user 0m22.292s 00:22:59.826 sys 0m10.277s 00:22:59.826 23:19:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:59.826 23:19:22 -- common/autotest_common.sh@10 -- # set +x 00:22:59.826 ************************************ 00:22:59.826 END TEST nvmf_fips 00:22:59.826 ************************************ 00:22:59.826 23:19:22 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:22:59.826 23:19:22 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:22:59.826 23:19:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:59.826 23:19:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:59.826 23:19:22 -- common/autotest_common.sh@10 -- # set +x 00:22:59.826 ************************************ 00:22:59.826 START TEST nvmf_fuzz 00:22:59.826 ************************************ 00:22:59.826 23:19:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:22:59.826 * Looking for test storage... 
00:22:59.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:59.826 23:19:22 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:59.826 23:19:22 -- nvmf/common.sh@7 -- # uname -s 00:22:59.826 23:19:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:59.826 23:19:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:59.826 23:19:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:59.826 23:19:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:59.826 23:19:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:59.826 23:19:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:59.826 23:19:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:59.826 23:19:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:59.826 23:19:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:59.826 23:19:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:59.826 23:19:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:59.826 23:19:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:59.826 23:19:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:59.827 23:19:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:59.827 23:19:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:59.827 23:19:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:59.827 23:19:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:59.827 23:19:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:59.827 23:19:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:59.827 23:19:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.827 23:19:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.827 23:19:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.827 23:19:22 -- paths/export.sh@5 -- # export PATH 00:22:59.827 23:19:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.827 23:19:22 -- nvmf/common.sh@46 -- # : 0 00:22:59.827 23:19:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:59.827 23:19:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:59.827 23:19:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:59.827 23:19:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:59.827 23:19:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:59.827 23:19:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:59.827 23:19:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:59.827 23:19:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:59.827 23:19:22 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:22:59.827 23:19:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:59.827 23:19:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:59.827 23:19:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:59.827 23:19:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:59.827 23:19:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:59.827 23:19:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.827 23:19:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:59.827 23:19:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.827 23:19:22 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:59.827 23:19:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:59.827 23:19:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:59.827 23:19:22 -- common/autotest_common.sh@10 -- # set +x 00:23:07.967 23:19:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:07.967 23:19:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:07.967 23:19:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:07.967 23:19:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:07.967 23:19:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:07.967 23:19:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:07.967 23:19:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:07.967 23:19:29 -- nvmf/common.sh@294 -- # net_devs=() 00:23:07.967 23:19:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:07.967 23:19:29 -- nvmf/common.sh@295 -- # e810=() 00:23:07.967 23:19:29 -- nvmf/common.sh@295 -- # local -ga e810 00:23:07.967 23:19:29 -- nvmf/common.sh@296 -- # x722=() 
00:23:07.967 23:19:29 -- nvmf/common.sh@296 -- # local -ga x722 00:23:07.967 23:19:29 -- nvmf/common.sh@297 -- # mlx=() 00:23:07.967 23:19:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:07.967 23:19:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:07.967 23:19:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:07.967 23:19:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:07.967 23:19:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:07.967 23:19:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:07.967 23:19:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:07.967 23:19:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:07.967 23:19:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:07.967 23:19:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:07.967 23:19:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:07.967 23:19:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:07.967 23:19:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:07.967 23:19:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:07.967 23:19:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:07.967 23:19:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:07.967 23:19:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:07.967 23:19:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:07.967 23:19:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:07.967 23:19:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:07.967 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:07.967 23:19:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:07.967 23:19:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:07.967 23:19:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.967 23:19:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.967 23:19:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:07.967 23:19:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:07.967 23:19:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:07.967 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:07.967 23:19:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:07.967 23:19:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:07.967 23:19:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.967 23:19:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.967 23:19:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:07.967 23:19:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:07.967 23:19:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:07.967 23:19:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:07.967 23:19:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:07.967 23:19:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.967 23:19:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:07.967 23:19:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.967 23:19:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:07.967 Found net devices under 0000:31:00.0: cvl_0_0 00:23:07.967 23:19:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:23:07.967 23:19:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:07.967 23:19:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.967 23:19:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:07.967 23:19:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.967 23:19:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:07.967 Found net devices under 0000:31:00.1: cvl_0_1 00:23:07.967 23:19:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.967 23:19:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:07.967 23:19:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:07.967 23:19:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:07.967 23:19:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:07.967 23:19:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:07.967 23:19:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:07.967 23:19:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:07.967 23:19:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:07.967 23:19:29 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:07.967 23:19:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:07.967 23:19:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:07.967 23:19:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:07.967 23:19:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:07.967 23:19:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:07.967 23:19:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:07.967 23:19:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:07.967 23:19:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:07.967 23:19:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:07.967 23:19:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:07.967 23:19:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:07.967 23:19:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:07.967 23:19:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:07.967 23:19:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:07.967 23:19:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:07.967 23:19:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:07.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:07.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:23:07.967 00:23:07.967 --- 10.0.0.2 ping statistics --- 00:23:07.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.967 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:23:07.967 23:19:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:07.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:07.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.354 ms 00:23:07.967 00:23:07.967 --- 10.0.0.1 ping statistics --- 00:23:07.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.967 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:23:07.967 23:19:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:07.967 23:19:29 -- nvmf/common.sh@410 -- # return 0 00:23:07.967 23:19:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:07.967 23:19:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:07.967 23:19:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:07.967 23:19:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:07.967 23:19:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:07.967 23:19:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:07.967 23:19:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:07.967 23:19:29 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:07.967 23:19:29 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=2894697 00:23:07.967 23:19:29 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:07.967 23:19:29 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2894697 00:23:07.967 23:19:29 -- common/autotest_common.sh@819 -- # '[' -z 2894697 ']' 00:23:07.967 23:19:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.967 23:19:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:07.967 23:19:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
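The fuzz target is now running inside the cvl_0_0_ns_spdk namespace; the rpc_cmd trace that follows (transport, malloc bdev, subsystem, namespace, listener, then the timed fuzz run) amounts to roughly this sequence when written as direct scripts/rpc.py calls, with the waitforlisten step omitted and $rootdir standing in for this job's spdk checkout:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # target was started as: ip netns exec cvl_0_0_ns_spdk $rootdir/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
    # once /var/tmp/spdk.sock is listening, create a TCP transport and one malloc-backed subsystem
    $rootdir/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    $rootdir/scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
    $rootdir/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rootdir/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rootdir/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # 30 s timed run of the NVMe fuzzer against that listener, fixed seed 123456, as in the trace below
    $rootdir/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 \
        -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a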
00:23:07.967 23:19:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:07.967 23:19:29 -- common/autotest_common.sh@10 -- # set +x 00:23:07.967 23:19:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:07.967 23:19:30 -- common/autotest_common.sh@852 -- # return 0 00:23:07.967 23:19:30 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:07.967 23:19:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:07.967 23:19:30 -- common/autotest_common.sh@10 -- # set +x 00:23:07.967 23:19:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:07.967 23:19:30 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:07.967 23:19:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:07.967 23:19:30 -- common/autotest_common.sh@10 -- # set +x 00:23:07.967 Malloc0 00:23:07.967 23:19:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:07.967 23:19:30 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:07.967 23:19:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:07.967 23:19:30 -- common/autotest_common.sh@10 -- # set +x 00:23:07.967 23:19:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:07.967 23:19:30 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:07.967 23:19:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:07.967 23:19:30 -- common/autotest_common.sh@10 -- # set +x 00:23:07.967 23:19:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:07.967 23:19:30 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:07.968 23:19:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:07.968 23:19:30 -- common/autotest_common.sh@10 -- # set +x 00:23:07.968 23:19:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:07.968 23:19:30 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:23:07.968 23:19:30 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:23:40.080 Fuzzing completed. Shutting down the fuzz application 00:23:40.081 00:23:40.081 Dumping successful admin opcodes: 00:23:40.081 8, 9, 10, 24, 00:23:40.081 Dumping successful io opcodes: 00:23:40.081 0, 9, 00:23:40.081 NS: 0x200003aeff00 I/O qp, Total commands completed: 969240, total successful commands: 5668, random_seed: 1179646016 00:23:40.081 NS: 0x200003aeff00 admin qp, Total commands completed: 122125, total successful commands: 1005, random_seed: 4075700160 00:23:40.081 23:20:00 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:23:40.081 Fuzzing completed. 
Shutting down the fuzz application 00:23:40.081 00:23:40.081 Dumping successful admin opcodes: 00:23:40.081 24, 00:23:40.081 Dumping successful io opcodes: 00:23:40.081 00:23:40.081 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 604478429 00:23:40.081 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 604550133 00:23:40.081 23:20:02 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:40.081 23:20:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:40.081 23:20:02 -- common/autotest_common.sh@10 -- # set +x 00:23:40.081 23:20:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:40.081 23:20:02 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:23:40.081 23:20:02 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:23:40.081 23:20:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:40.081 23:20:02 -- nvmf/common.sh@116 -- # sync 00:23:40.081 23:20:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:40.081 23:20:02 -- nvmf/common.sh@119 -- # set +e 00:23:40.081 23:20:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:40.081 23:20:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:40.081 rmmod nvme_tcp 00:23:40.081 rmmod nvme_fabrics 00:23:40.081 rmmod nvme_keyring 00:23:40.081 23:20:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:40.081 23:20:02 -- nvmf/common.sh@123 -- # set -e 00:23:40.081 23:20:02 -- nvmf/common.sh@124 -- # return 0 00:23:40.081 23:20:02 -- nvmf/common.sh@477 -- # '[' -n 2894697 ']' 00:23:40.081 23:20:02 -- nvmf/common.sh@478 -- # killprocess 2894697 00:23:40.081 23:20:02 -- common/autotest_common.sh@926 -- # '[' -z 2894697 ']' 00:23:40.081 23:20:02 -- common/autotest_common.sh@930 -- # kill -0 2894697 00:23:40.081 23:20:02 -- common/autotest_common.sh@931 -- # uname 00:23:40.081 23:20:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:40.081 23:20:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2894697 00:23:40.081 23:20:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:40.081 23:20:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:40.081 23:20:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2894697' 00:23:40.081 killing process with pid 2894697 00:23:40.081 23:20:02 -- common/autotest_common.sh@945 -- # kill 2894697 00:23:40.081 23:20:02 -- common/autotest_common.sh@950 -- # wait 2894697 00:23:40.081 23:20:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:40.081 23:20:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:40.081 23:20:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:40.081 23:20:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:40.081 23:20:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:40.081 23:20:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.081 23:20:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:40.081 23:20:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.121 23:20:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:42.121 23:20:04 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:23:42.121 00:23:42.121 real 0m42.070s 00:23:42.121 user 0m56.794s 00:23:42.121 sys 
0m14.415s 00:23:42.121 23:20:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:42.121 23:20:04 -- common/autotest_common.sh@10 -- # set +x 00:23:42.121 ************************************ 00:23:42.121 END TEST nvmf_fuzz 00:23:42.121 ************************************ 00:23:42.121 23:20:04 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:42.121 23:20:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:42.121 23:20:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:42.121 23:20:04 -- common/autotest_common.sh@10 -- # set +x 00:23:42.121 ************************************ 00:23:42.121 START TEST nvmf_multiconnection 00:23:42.121 ************************************ 00:23:42.121 23:20:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:42.121 * Looking for test storage... 00:23:42.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:42.121 23:20:04 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:42.121 23:20:04 -- nvmf/common.sh@7 -- # uname -s 00:23:42.121 23:20:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:42.121 23:20:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:42.121 23:20:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:42.121 23:20:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:42.121 23:20:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:42.121 23:20:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:42.121 23:20:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:42.121 23:20:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:42.121 23:20:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:42.121 23:20:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:42.121 23:20:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:42.121 23:20:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:42.121 23:20:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:42.121 23:20:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:42.121 23:20:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:42.121 23:20:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:42.121 23:20:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:42.121 23:20:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:42.121 23:20:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:42.121 23:20:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.121 23:20:04 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.121 23:20:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.121 23:20:04 -- paths/export.sh@5 -- # export PATH 00:23:42.121 23:20:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.121 23:20:04 -- nvmf/common.sh@46 -- # : 0 00:23:42.121 23:20:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:42.121 23:20:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:42.121 23:20:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:42.121 23:20:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:42.121 23:20:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:42.121 23:20:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:42.121 23:20:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:42.121 23:20:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:42.121 23:20:04 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:42.121 23:20:04 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:42.121 23:20:04 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:23:42.121 23:20:04 -- target/multiconnection.sh@16 -- # nvmftestinit 00:23:42.121 23:20:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:42.121 23:20:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:42.121 23:20:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:42.121 23:20:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:42.121 23:20:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:42.121 23:20:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.121 23:20:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:42.121 23:20:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.121 23:20:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:42.121 23:20:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:42.121 23:20:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:42.121 23:20:04 -- common/autotest_common.sh@10 -- 
# set +x 00:23:50.264 23:20:11 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:50.264 23:20:11 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:50.264 23:20:11 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:50.264 23:20:11 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:50.264 23:20:11 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:50.264 23:20:11 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:50.264 23:20:11 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:50.264 23:20:11 -- nvmf/common.sh@294 -- # net_devs=() 00:23:50.264 23:20:11 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:50.264 23:20:11 -- nvmf/common.sh@295 -- # e810=() 00:23:50.264 23:20:11 -- nvmf/common.sh@295 -- # local -ga e810 00:23:50.264 23:20:11 -- nvmf/common.sh@296 -- # x722=() 00:23:50.264 23:20:11 -- nvmf/common.sh@296 -- # local -ga x722 00:23:50.264 23:20:11 -- nvmf/common.sh@297 -- # mlx=() 00:23:50.264 23:20:11 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:50.264 23:20:11 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:50.264 23:20:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:50.264 23:20:11 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:50.264 23:20:11 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:50.264 23:20:11 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:50.264 23:20:11 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:50.264 23:20:11 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:50.264 23:20:11 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:50.264 23:20:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:50.264 23:20:11 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:50.264 23:20:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:50.264 23:20:11 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:50.264 23:20:11 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:50.264 23:20:11 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:50.264 23:20:11 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:50.264 23:20:11 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:50.264 23:20:11 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:50.264 23:20:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:50.264 23:20:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:50.264 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:50.264 23:20:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:50.264 23:20:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:50.264 23:20:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.264 23:20:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.264 23:20:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:50.264 23:20:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:50.264 23:20:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:50.264 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:50.264 23:20:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:50.264 23:20:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:50.264 23:20:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.264 23:20:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.264 23:20:11 -- 
nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:50.264 23:20:11 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:50.264 23:20:11 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:50.264 23:20:11 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:50.264 23:20:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:50.264 23:20:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.264 23:20:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:50.264 23:20:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.264 23:20:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:50.264 Found net devices under 0000:31:00.0: cvl_0_0 00:23:50.264 23:20:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.264 23:20:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:50.264 23:20:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.264 23:20:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:50.264 23:20:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.264 23:20:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:50.264 Found net devices under 0000:31:00.1: cvl_0_1 00:23:50.264 23:20:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.264 23:20:11 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:50.264 23:20:11 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:50.264 23:20:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:50.264 23:20:11 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:50.264 23:20:11 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:50.264 23:20:11 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:50.264 23:20:11 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:50.264 23:20:11 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:50.264 23:20:11 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:50.264 23:20:11 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:50.264 23:20:11 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:50.265 23:20:11 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:50.265 23:20:11 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:50.265 23:20:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:50.265 23:20:11 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:50.265 23:20:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:50.265 23:20:11 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:50.265 23:20:11 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:50.265 23:20:11 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:50.265 23:20:11 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:50.265 23:20:11 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:50.265 23:20:11 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:50.265 23:20:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:50.265 23:20:11 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:50.265 23:20:11 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:50.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:50.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:23:50.265 00:23:50.265 --- 10.0.0.2 ping statistics --- 00:23:50.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.265 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:23:50.265 23:20:11 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:50.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:50.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:23:50.265 00:23:50.265 --- 10.0.0.1 ping statistics --- 00:23:50.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.265 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:23:50.265 23:20:11 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:50.265 23:20:11 -- nvmf/common.sh@410 -- # return 0 00:23:50.265 23:20:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:50.265 23:20:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:50.265 23:20:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:50.265 23:20:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:50.265 23:20:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:50.265 23:20:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:50.265 23:20:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:50.265 23:20:11 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:23:50.265 23:20:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:50.265 23:20:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:50.265 23:20:11 -- common/autotest_common.sh@10 -- # set +x 00:23:50.265 23:20:11 -- nvmf/common.sh@469 -- # nvmfpid=2905371 00:23:50.265 23:20:11 -- nvmf/common.sh@470 -- # waitforlisten 2905371 00:23:50.265 23:20:11 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:50.265 23:20:11 -- common/autotest_common.sh@819 -- # '[' -z 2905371 ']' 00:23:50.265 23:20:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:50.265 23:20:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:50.265 23:20:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:50.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:50.265 23:20:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:50.265 23:20:11 -- common/autotest_common.sh@10 -- # set +x 00:23:50.265 [2024-06-07 23:20:12.029289] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:50.265 [2024-06-07 23:20:12.029354] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:50.265 EAL: No free 2048 kB hugepages reported on node 1 00:23:50.265 [2024-06-07 23:20:12.104007] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:50.265 [2024-06-07 23:20:12.143037] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:50.265 [2024-06-07 23:20:12.143194] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:50.265 [2024-06-07 23:20:12.143205] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:50.265 [2024-06-07 23:20:12.143213] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:50.265 [2024-06-07 23:20:12.143293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:50.265 [2024-06-07 23:20:12.143435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.265 [2024-06-07 23:20:12.143437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:50.265 [2024-06-07 23:20:12.143381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:50.265 23:20:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:50.265 23:20:12 -- common/autotest_common.sh@852 -- # return 0 00:23:50.265 23:20:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:50.265 23:20:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:50.265 23:20:12 -- common/autotest_common.sh@10 -- # set +x 00:23:50.265 23:20:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:50.265 23:20:12 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:50.265 23:20:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.265 23:20:12 -- common/autotest_common.sh@10 -- # set +x 00:23:50.265 [2024-06-07 23:20:12.848535] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:50.265 23:20:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.265 23:20:12 -- target/multiconnection.sh@21 -- # seq 1 11 00:23:50.265 23:20:12 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:50.265 23:20:12 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:50.265 23:20:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.265 23:20:12 -- common/autotest_common.sh@10 -- # set +x 00:23:50.265 Malloc1 00:23:50.265 23:20:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.265 23:20:12 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:23:50.265 23:20:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.265 23:20:12 -- common/autotest_common.sh@10 -- # set +x 00:23:50.265 23:20:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.265 23:20:12 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:50.265 23:20:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.265 23:20:12 -- common/autotest_common.sh@10 -- # set +x 00:23:50.265 23:20:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.265 23:20:12 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:50.265 23:20:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.265 23:20:12 -- common/autotest_common.sh@10 -- # set +x 00:23:50.265 [2024-06-07 23:20:12.915971] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:50.265 23:20:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.265 23:20:12 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:50.265 23:20:12 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:23:50.265 23:20:12 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.265 23:20:12 -- common/autotest_common.sh@10 -- # set +x 00:23:50.265 Malloc2 00:23:50.265 23:20:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.265 23:20:12 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:23:50.265 23:20:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.265 23:20:12 -- common/autotest_common.sh@10 -- # set +x 00:23:50.526 23:20:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.526 23:20:12 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:23:50.526 23:20:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.526 23:20:12 -- common/autotest_common.sh@10 -- # set +x 00:23:50.526 23:20:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.526 23:20:12 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:50.526 23:20:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.526 23:20:12 -- common/autotest_common.sh@10 -- # set +x 00:23:50.526 23:20:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.526 23:20:12 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:50.526 23:20:12 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:23:50.526 23:20:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.526 23:20:12 -- common/autotest_common.sh@10 -- # set +x 00:23:50.526 Malloc3 00:23:50.526 23:20:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.526 23:20:12 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:23:50.526 23:20:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.526 23:20:12 -- common/autotest_common.sh@10 -- # set +x 00:23:50.526 23:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.526 23:20:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:23:50.526 23:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.526 23:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:50.526 23:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.526 23:20:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:23:50.526 23:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.526 23:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:50.526 23:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.526 23:20:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:50.526 23:20:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:23:50.526 23:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.526 23:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:50.526 Malloc4 00:23:50.526 23:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.526 23:20:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:23:50.526 23:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.526 23:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:50.526 23:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.526 23:20:13 -- target/multiconnection.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:23:50.526 23:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.526 23:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:50.526 23:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.526 23:20:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:23:50.526 23:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.526 23:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:50.526 23:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.526 23:20:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:50.526 23:20:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:23:50.526 23:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.526 23:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:50.526 Malloc5 00:23:50.526 23:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.526 23:20:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:23:50.526 23:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.526 23:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:50.526 23:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.526 23:20:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:23:50.526 23:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.526 23:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:50.526 23:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.526 23:20:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:23:50.526 23:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.526 23:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:50.526 23:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.526 23:20:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:50.526 23:20:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:23:50.526 23:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.526 23:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:50.526 Malloc6 00:23:50.526 23:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.526 23:20:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:23:50.526 23:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.526 23:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:50.526 23:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.526 23:20:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:23:50.526 23:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.526 23:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:50.526 23:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.526 23:20:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:23:50.526 23:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.526 23:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:50.526 23:20:13 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.526 23:20:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:50.526 23:20:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:23:50.526 23:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.526 23:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:50.526 Malloc7 00:23:50.527 23:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.527 23:20:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:23:50.527 23:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.527 23:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:50.787 23:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.787 23:20:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:23:50.787 23:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.787 23:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:50.787 23:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.787 23:20:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:23:50.787 23:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.787 23:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:50.787 23:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.787 23:20:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:50.787 23:20:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:23:50.787 23:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.787 23:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:50.787 Malloc8 00:23:50.787 23:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.787 23:20:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:23:50.787 23:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.787 23:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:50.787 23:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.787 23:20:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:23:50.787 23:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.787 23:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:50.787 23:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.787 23:20:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:23:50.787 23:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.787 23:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:50.787 23:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.787 23:20:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:50.787 23:20:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:23:50.787 23:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.787 23:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:50.787 Malloc9 00:23:50.787 23:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.787 23:20:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 
00:23:50.787 23:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.787 23:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:50.787 23:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.787 23:20:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:23:50.787 23:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.787 23:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:50.787 23:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.787 23:20:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:23:50.787 23:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.787 23:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:50.787 23:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.787 23:20:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:50.787 23:20:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:23:50.787 23:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.787 23:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:50.787 Malloc10 00:23:50.787 23:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.787 23:20:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:23:50.787 23:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.787 23:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:50.787 23:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.787 23:20:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:23:50.787 23:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.787 23:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:50.787 23:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.787 23:20:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:23:50.787 23:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.787 23:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:50.787 23:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.787 23:20:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:50.787 23:20:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:23:50.787 23:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.787 23:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:50.787 Malloc11 00:23:50.787 23:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.787 23:20:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:23:50.787 23:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.787 23:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:50.787 23:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.787 23:20:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:23:50.787 23:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.787 23:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:50.787 23:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.787 23:20:13 -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:23:50.787 23:20:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.787 23:20:13 -- common/autotest_common.sh@10 -- # set +x 00:23:50.787 23:20:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.787 23:20:13 -- target/multiconnection.sh@28 -- # seq 1 11 00:23:50.787 23:20:13 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:50.787 23:20:13 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:52.699 23:20:14 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:23:52.699 23:20:14 -- common/autotest_common.sh@1177 -- # local i=0 00:23:52.699 23:20:14 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:52.699 23:20:14 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:52.699 23:20:14 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:54.608 23:20:16 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:54.608 23:20:16 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:54.608 23:20:16 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:23:54.608 23:20:16 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:54.608 23:20:16 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:54.608 23:20:16 -- common/autotest_common.sh@1187 -- # return 0 00:23:54.608 23:20:16 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:54.608 23:20:16 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:23:55.991 23:20:18 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:23:55.991 23:20:18 -- common/autotest_common.sh@1177 -- # local i=0 00:23:55.991 23:20:18 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:55.991 23:20:18 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:55.991 23:20:18 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:57.901 23:20:20 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:57.901 23:20:20 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:57.901 23:20:20 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:23:57.901 23:20:20 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:57.901 23:20:20 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:57.901 23:20:20 -- common/autotest_common.sh@1187 -- # return 0 00:23:57.901 23:20:20 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:57.901 23:20:20 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:23:59.813 23:20:22 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:23:59.813 23:20:22 -- common/autotest_common.sh@1177 -- # local i=0 00:23:59.813 23:20:22 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:59.813 23:20:22 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:59.813 23:20:22 -- 
common/autotest_common.sh@1184 -- # sleep 2 00:24:01.727 23:20:24 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:01.727 23:20:24 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:01.727 23:20:24 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:24:01.727 23:20:24 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:01.727 23:20:24 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:01.727 23:20:24 -- common/autotest_common.sh@1187 -- # return 0 00:24:01.727 23:20:24 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:01.727 23:20:24 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:24:03.113 23:20:25 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:03.113 23:20:25 -- common/autotest_common.sh@1177 -- # local i=0 00:24:03.113 23:20:25 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:03.113 23:20:25 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:03.113 23:20:25 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:05.027 23:20:27 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:05.288 23:20:27 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:05.288 23:20:27 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:24:05.288 23:20:27 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:05.288 23:20:27 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:05.288 23:20:27 -- common/autotest_common.sh@1187 -- # return 0 00:24:05.288 23:20:27 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:05.288 23:20:27 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:24:06.674 23:20:29 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:06.674 23:20:29 -- common/autotest_common.sh@1177 -- # local i=0 00:24:06.674 23:20:29 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:06.674 23:20:29 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:06.674 23:20:29 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:09.221 23:20:31 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:09.221 23:20:31 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:09.221 23:20:31 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:24:09.221 23:20:31 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:09.221 23:20:31 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:09.221 23:20:31 -- common/autotest_common.sh@1187 -- # return 0 00:24:09.221 23:20:31 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:09.221 23:20:31 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:24:10.606 23:20:33 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:10.606 23:20:33 -- common/autotest_common.sh@1177 -- # local i=0 00:24:10.606 23:20:33 -- common/autotest_common.sh@1178 -- # local 
nvme_device_counter=1 nvme_devices=0 00:24:10.606 23:20:33 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:10.606 23:20:33 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:12.518 23:20:35 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:12.518 23:20:35 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:12.518 23:20:35 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:24:12.518 23:20:35 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:12.519 23:20:35 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:12.519 23:20:35 -- common/autotest_common.sh@1187 -- # return 0 00:24:12.519 23:20:35 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:12.519 23:20:35 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:24:14.432 23:20:36 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:14.432 23:20:36 -- common/autotest_common.sh@1177 -- # local i=0 00:24:14.432 23:20:36 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:14.432 23:20:36 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:14.432 23:20:36 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:16.343 23:20:38 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:16.343 23:20:38 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:16.343 23:20:38 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:24:16.343 23:20:38 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:16.343 23:20:38 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:16.343 23:20:38 -- common/autotest_common.sh@1187 -- # return 0 00:24:16.343 23:20:38 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:16.343 23:20:38 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:24:17.727 23:20:40 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:24:17.727 23:20:40 -- common/autotest_common.sh@1177 -- # local i=0 00:24:17.727 23:20:40 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:17.727 23:20:40 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:17.727 23:20:40 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:20.269 23:20:42 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:20.269 23:20:42 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:20.269 23:20:42 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:24:20.269 23:20:42 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:20.269 23:20:42 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:20.269 23:20:42 -- common/autotest_common.sh@1187 -- # return 0 00:24:20.269 23:20:42 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:20.269 23:20:42 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:24:21.659 23:20:44 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:24:21.659 
23:20:44 -- common/autotest_common.sh@1177 -- # local i=0 00:24:21.659 23:20:44 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:21.659 23:20:44 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:21.659 23:20:44 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:23.644 23:20:46 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:23.644 23:20:46 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:23.644 23:20:46 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:24:23.644 23:20:46 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:23.644 23:20:46 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:23.644 23:20:46 -- common/autotest_common.sh@1187 -- # return 0 00:24:23.645 23:20:46 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:23.645 23:20:46 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:24:25.561 23:20:48 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:24:25.561 23:20:48 -- common/autotest_common.sh@1177 -- # local i=0 00:24:25.561 23:20:48 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:25.561 23:20:48 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:25.561 23:20:48 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:27.475 23:20:50 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:27.475 23:20:50 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:27.475 23:20:50 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:24:27.475 23:20:50 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:27.475 23:20:50 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:27.475 23:20:50 -- common/autotest_common.sh@1187 -- # return 0 00:24:27.475 23:20:50 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:27.475 23:20:50 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:24:29.387 23:20:51 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:24:29.387 23:20:51 -- common/autotest_common.sh@1177 -- # local i=0 00:24:29.387 23:20:51 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:29.387 23:20:51 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:29.387 23:20:51 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:31.298 23:20:53 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:31.298 23:20:53 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:31.298 23:20:53 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:24:31.298 23:20:53 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:31.298 23:20:53 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:31.298 23:20:53 -- common/autotest_common.sh@1187 -- # return 0 00:24:31.298 23:20:53 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:24:31.298 [global] 00:24:31.298 thread=1 00:24:31.298 invalidate=1 00:24:31.298 rw=read 00:24:31.298 time_based=1 00:24:31.298 
runtime=10 00:24:31.298 ioengine=libaio 00:24:31.298 direct=1 00:24:31.298 bs=262144 00:24:31.298 iodepth=64 00:24:31.298 norandommap=1 00:24:31.298 numjobs=1 00:24:31.298 00:24:31.298 [job0] 00:24:31.298 filename=/dev/nvme0n1 00:24:31.298 [job1] 00:24:31.298 filename=/dev/nvme10n1 00:24:31.298 [job2] 00:24:31.298 filename=/dev/nvme1n1 00:24:31.298 [job3] 00:24:31.298 filename=/dev/nvme2n1 00:24:31.298 [job4] 00:24:31.298 filename=/dev/nvme3n1 00:24:31.298 [job5] 00:24:31.298 filename=/dev/nvme4n1 00:24:31.298 [job6] 00:24:31.298 filename=/dev/nvme5n1 00:24:31.298 [job7] 00:24:31.298 filename=/dev/nvme6n1 00:24:31.298 [job8] 00:24:31.298 filename=/dev/nvme7n1 00:24:31.298 [job9] 00:24:31.298 filename=/dev/nvme8n1 00:24:31.298 [job10] 00:24:31.298 filename=/dev/nvme9n1 00:24:31.560 Could not set queue depth (nvme0n1) 00:24:31.560 Could not set queue depth (nvme10n1) 00:24:31.560 Could not set queue depth (nvme1n1) 00:24:31.560 Could not set queue depth (nvme2n1) 00:24:31.560 Could not set queue depth (nvme3n1) 00:24:31.560 Could not set queue depth (nvme4n1) 00:24:31.560 Could not set queue depth (nvme5n1) 00:24:31.560 Could not set queue depth (nvme6n1) 00:24:31.560 Could not set queue depth (nvme7n1) 00:24:31.560 Could not set queue depth (nvme8n1) 00:24:31.560 Could not set queue depth (nvme9n1) 00:24:31.820 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:31.820 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:31.820 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:31.820 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:31.820 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:31.820 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:31.820 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:31.820 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:31.820 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:31.820 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:31.820 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:31.820 fio-3.35 00:24:31.820 Starting 11 threads 00:24:44.049 00:24:44.049 job0: (groupid=0, jobs=1): err= 0: pid=2914023: Fri Jun 7 23:21:04 2024 00:24:44.049 read: IOPS=693, BW=173MiB/s (182MB/s)(1749MiB/10078msec) 00:24:44.049 slat (usec): min=6, max=130915, avg=1163.89, stdev=4390.68 00:24:44.049 clat (msec): min=2, max=224, avg=90.92, stdev=38.09 00:24:44.049 lat (msec): min=2, max=294, avg=92.09, stdev=38.71 00:24:44.049 clat percentiles (msec): 00:24:44.049 | 1.00th=[ 7], 5.00th=[ 23], 10.00th=[ 40], 20.00th=[ 55], 00:24:44.049 | 30.00th=[ 69], 40.00th=[ 86], 50.00th=[ 97], 60.00th=[ 105], 00:24:44.049 | 70.00th=[ 112], 80.00th=[ 124], 90.00th=[ 138], 95.00th=[ 150], 00:24:44.049 | 99.00th=[ 169], 99.50th=[ 174], 99.90th=[ 199], 99.95th=[ 211], 00:24:44.049 | 99.99th=[ 224] 00:24:44.049 bw ( KiB/s): min=109568, max=374272, per=6.93%, avg=177433.60, 
stdev=60282.82, samples=20 00:24:44.049 iops : min= 428, max= 1462, avg=693.10, stdev=235.48, samples=20 00:24:44.049 lat (msec) : 4=0.09%, 10=1.36%, 20=2.09%, 50=13.71%, 100=35.34% 00:24:44.049 lat (msec) : 250=47.41% 00:24:44.049 cpu : usr=0.36%, sys=2.12%, ctx=1914, majf=0, minf=3535 00:24:44.049 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:44.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.049 issued rwts: total=6994,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.049 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.049 job1: (groupid=0, jobs=1): err= 0: pid=2914024: Fri Jun 7 23:21:04 2024 00:24:44.049 read: IOPS=742, BW=186MiB/s (195MB/s)(1871MiB/10074msec) 00:24:44.049 slat (usec): min=6, max=89252, avg=1238.82, stdev=3992.91 00:24:44.049 clat (msec): min=11, max=205, avg=84.78, stdev=34.43 00:24:44.049 lat (msec): min=11, max=217, avg=86.02, stdev=34.97 00:24:44.049 clat percentiles (msec): 00:24:44.049 | 1.00th=[ 20], 5.00th=[ 30], 10.00th=[ 36], 20.00th=[ 56], 00:24:44.049 | 30.00th=[ 66], 40.00th=[ 74], 50.00th=[ 82], 60.00th=[ 91], 00:24:44.049 | 70.00th=[ 101], 80.00th=[ 114], 90.00th=[ 136], 95.00th=[ 146], 00:24:44.049 | 99.00th=[ 167], 99.50th=[ 178], 99.90th=[ 197], 99.95th=[ 197], 00:24:44.049 | 99.99th=[ 207] 00:24:44.049 bw ( KiB/s): min=108544, max=390144, per=7.42%, avg=189977.60, stdev=66844.51, samples=20 00:24:44.049 iops : min= 424, max= 1524, avg=742.10, stdev=261.11, samples=20 00:24:44.049 lat (msec) : 20=1.02%, 50=14.20%, 100=54.65%, 250=30.13% 00:24:44.049 cpu : usr=0.31%, sys=2.47%, ctx=1721, majf=0, minf=4097 00:24:44.049 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:44.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.049 issued rwts: total=7484,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.049 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.049 job2: (groupid=0, jobs=1): err= 0: pid=2914025: Fri Jun 7 23:21:04 2024 00:24:44.049 read: IOPS=809, BW=202MiB/s (212MB/s)(2032MiB/10045msec) 00:24:44.049 slat (usec): min=6, max=102882, avg=1067.84, stdev=4273.84 00:24:44.049 clat (usec): min=1974, max=238457, avg=77942.27, stdev=44067.37 00:24:44.049 lat (msec): min=2, max=258, avg=79.01, stdev=44.88 00:24:44.049 clat percentiles (msec): 00:24:44.049 | 1.00th=[ 9], 5.00th=[ 16], 10.00th=[ 24], 20.00th=[ 35], 00:24:44.049 | 30.00th=[ 45], 40.00th=[ 53], 50.00th=[ 70], 60.00th=[ 99], 00:24:44.049 | 70.00th=[ 109], 80.00th=[ 123], 90.00th=[ 136], 95.00th=[ 148], 00:24:44.049 | 99.00th=[ 169], 99.50th=[ 176], 99.90th=[ 186], 99.95th=[ 201], 00:24:44.049 | 99.99th=[ 239] 00:24:44.049 bw ( KiB/s): min=101376, max=429568, per=8.06%, avg=206476.10, stdev=100031.13, samples=20 00:24:44.049 iops : min= 396, max= 1678, avg=806.50, stdev=390.79, samples=20 00:24:44.049 lat (msec) : 2=0.04%, 4=0.14%, 10=1.30%, 20=6.26%, 50=29.29% 00:24:44.049 lat (msec) : 100=24.51%, 250=38.46% 00:24:44.049 cpu : usr=0.37%, sys=2.74%, ctx=2133, majf=0, minf=4097 00:24:44.049 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:44.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.049 issued rwts: 
total=8128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.049 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.049 job3: (groupid=0, jobs=1): err= 0: pid=2914026: Fri Jun 7 23:21:04 2024 00:24:44.049 read: IOPS=1300, BW=325MiB/s (341MB/s)(3254MiB/10011msec) 00:24:44.049 slat (usec): min=6, max=22178, avg=764.96, stdev=1881.98 00:24:44.049 clat (msec): min=9, max=118, avg=48.38, stdev=17.35 00:24:44.049 lat (msec): min=10, max=118, avg=49.14, stdev=17.58 00:24:44.049 clat percentiles (msec): 00:24:44.049 | 1.00th=[ 25], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 31], 00:24:44.049 | 30.00th=[ 35], 40.00th=[ 42], 50.00th=[ 48], 60.00th=[ 53], 00:24:44.049 | 70.00th=[ 58], 80.00th=[ 63], 90.00th=[ 71], 95.00th=[ 82], 00:24:44.049 | 99.00th=[ 97], 99.50th=[ 103], 99.90th=[ 110], 99.95th=[ 111], 00:24:44.049 | 99.99th=[ 118] 00:24:44.049 bw ( KiB/s): min=190976, max=497664, per=12.50%, avg=320215.58, stdev=83951.40, samples=19 00:24:44.049 iops : min= 746, max= 1944, avg=1250.84, stdev=327.94, samples=19 00:24:44.049 lat (msec) : 10=0.01%, 20=0.26%, 50=55.22%, 100=43.79%, 250=0.71% 00:24:44.049 cpu : usr=0.48%, sys=3.99%, ctx=2695, majf=0, minf=4097 00:24:44.049 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:24:44.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.050 issued rwts: total=13016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.050 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.050 job4: (groupid=0, jobs=1): err= 0: pid=2914027: Fri Jun 7 23:21:04 2024 00:24:44.050 read: IOPS=1173, BW=293MiB/s (308MB/s)(2944MiB/10037msec) 00:24:44.050 slat (usec): min=6, max=84979, avg=636.83, stdev=2337.06 00:24:44.050 clat (msec): min=2, max=176, avg=53.84, stdev=25.09 00:24:44.050 lat (msec): min=2, max=208, avg=54.48, stdev=25.44 00:24:44.050 clat percentiles (msec): 00:24:44.050 | 1.00th=[ 8], 5.00th=[ 20], 10.00th=[ 29], 20.00th=[ 36], 00:24:44.050 | 30.00th=[ 41], 40.00th=[ 44], 50.00th=[ 49], 60.00th=[ 55], 00:24:44.050 | 70.00th=[ 62], 80.00th=[ 70], 90.00th=[ 87], 95.00th=[ 107], 00:24:44.050 | 99.00th=[ 132], 99.50th=[ 138], 99.90th=[ 153], 99.95th=[ 153], 00:24:44.050 | 99.99th=[ 178] 00:24:44.050 bw ( KiB/s): min=148992, max=398848, per=11.71%, avg=299890.80, stdev=69335.33, samples=20 00:24:44.050 iops : min= 582, max= 1558, avg=1171.40, stdev=270.78, samples=20 00:24:44.050 lat (msec) : 4=0.15%, 10=1.79%, 20=3.24%, 50=47.72%, 100=40.85% 00:24:44.050 lat (msec) : 250=6.25% 00:24:44.050 cpu : usr=0.44%, sys=3.92%, ctx=3080, majf=0, minf=4097 00:24:44.050 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:24:44.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.050 issued rwts: total=11776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.050 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.050 job5: (groupid=0, jobs=1): err= 0: pid=2914029: Fri Jun 7 23:21:04 2024 00:24:44.050 read: IOPS=1037, BW=259MiB/s (272MB/s)(2608MiB/10055msec) 00:24:44.050 slat (usec): min=6, max=120040, avg=884.34, stdev=4145.05 00:24:44.050 clat (msec): min=3, max=243, avg=60.67, stdev=37.50 00:24:44.050 lat (msec): min=3, max=243, avg=61.55, stdev=38.13 00:24:44.050 clat percentiles (msec): 00:24:44.050 | 1.00th=[ 13], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 32], 00:24:44.050 | 
30.00th=[ 36], 40.00th=[ 43], 50.00th=[ 50], 60.00th=[ 55], 00:24:44.050 | 70.00th=[ 65], 80.00th=[ 84], 90.00th=[ 128], 95.00th=[ 148], 00:24:44.050 | 99.00th=[ 169], 99.50th=[ 180], 99.90th=[ 218], 99.95th=[ 222], 00:24:44.050 | 99.99th=[ 245] 00:24:44.050 bw ( KiB/s): min=101376, max=466432, per=10.37%, avg=265472.00, stdev=108193.44, samples=20 00:24:44.050 iops : min= 396, max= 1822, avg=1037.00, stdev=422.63, samples=20 00:24:44.050 lat (msec) : 4=0.06%, 10=0.60%, 20=2.38%, 50=49.17%, 100=32.31% 00:24:44.050 lat (msec) : 250=15.48% 00:24:44.050 cpu : usr=0.30%, sys=3.36%, ctx=2354, majf=0, minf=4097 00:24:44.050 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:24:44.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.050 issued rwts: total=10433,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.050 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.050 job6: (groupid=0, jobs=1): err= 0: pid=2914033: Fri Jun 7 23:21:04 2024 00:24:44.050 read: IOPS=690, BW=173MiB/s (181MB/s)(1734MiB/10041msec) 00:24:44.050 slat (usec): min=8, max=91047, avg=1325.08, stdev=4458.40 00:24:44.050 clat (msec): min=3, max=224, avg=91.21, stdev=38.99 00:24:44.050 lat (msec): min=3, max=224, avg=92.53, stdev=39.66 00:24:44.050 clat percentiles (msec): 00:24:44.050 | 1.00th=[ 8], 5.00th=[ 24], 10.00th=[ 45], 20.00th=[ 54], 00:24:44.050 | 30.00th=[ 64], 40.00th=[ 80], 50.00th=[ 97], 60.00th=[ 107], 00:24:44.050 | 70.00th=[ 115], 80.00th=[ 127], 90.00th=[ 140], 95.00th=[ 153], 00:24:44.050 | 99.00th=[ 171], 99.50th=[ 178], 99.90th=[ 199], 99.95th=[ 224], 00:24:44.050 | 99.99th=[ 226] 00:24:44.050 bw ( KiB/s): min=101888, max=324096, per=6.87%, avg=175974.40, stdev=54691.65, samples=20 00:24:44.050 iops : min= 398, max= 1266, avg=687.40, stdev=213.64, samples=20 00:24:44.050 lat (msec) : 4=0.07%, 10=1.90%, 20=2.08%, 50=10.09%, 100=38.22% 00:24:44.050 lat (msec) : 250=47.64% 00:24:44.050 cpu : usr=0.37%, sys=2.49%, ctx=1766, majf=0, minf=4097 00:24:44.050 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:44.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.050 issued rwts: total=6937,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.050 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.050 job7: (groupid=0, jobs=1): err= 0: pid=2914035: Fri Jun 7 23:21:04 2024 00:24:44.050 read: IOPS=1018, BW=255MiB/s (267MB/s)(2565MiB/10074msec) 00:24:44.050 slat (usec): min=5, max=131253, avg=718.44, stdev=3818.13 00:24:44.050 clat (usec): min=1581, max=207148, avg=62046.35, stdev=41133.07 00:24:44.050 lat (usec): min=1630, max=287439, avg=62764.79, stdev=41760.87 00:24:44.050 clat percentiles (msec): 00:24:44.050 | 1.00th=[ 5], 5.00th=[ 12], 10.00th=[ 19], 20.00th=[ 25], 00:24:44.050 | 30.00th=[ 29], 40.00th=[ 39], 50.00th=[ 52], 60.00th=[ 67], 00:24:44.050 | 70.00th=[ 87], 80.00th=[ 106], 90.00th=[ 122], 95.00th=[ 138], 00:24:44.050 | 99.00th=[ 157], 99.50th=[ 171], 99.90th=[ 188], 99.95th=[ 192], 00:24:44.050 | 99.99th=[ 207] 00:24:44.050 bw ( KiB/s): min=129536, max=510464, per=10.19%, avg=261024.95, stdev=114831.57, samples=20 00:24:44.050 iops : min= 506, max= 1994, avg=1019.60, stdev=448.54, samples=20 00:24:44.050 lat (msec) : 2=0.02%, 4=0.49%, 10=3.24%, 20=7.47%, 50=37.34% 00:24:44.050 lat (msec) : 
100=28.26%, 250=23.19% 00:24:44.050 cpu : usr=0.40%, sys=3.22%, ctx=2879, majf=0, minf=4097 00:24:44.050 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:24:44.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.050 issued rwts: total=10259,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.050 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.050 job8: (groupid=0, jobs=1): err= 0: pid=2914041: Fri Jun 7 23:21:04 2024 00:24:44.050 read: IOPS=626, BW=157MiB/s (164MB/s)(1578MiB/10069msec) 00:24:44.050 slat (usec): min=7, max=102394, avg=1186.73, stdev=4823.41 00:24:44.050 clat (msec): min=4, max=250, avg=100.84, stdev=33.34 00:24:44.050 lat (msec): min=4, max=250, avg=102.02, stdev=33.98 00:24:44.050 clat percentiles (msec): 00:24:44.050 | 1.00th=[ 15], 5.00th=[ 52], 10.00th=[ 58], 20.00th=[ 70], 00:24:44.050 | 30.00th=[ 81], 40.00th=[ 92], 50.00th=[ 105], 60.00th=[ 112], 00:24:44.050 | 70.00th=[ 120], 80.00th=[ 130], 90.00th=[ 144], 95.00th=[ 157], 00:24:44.050 | 99.00th=[ 167], 99.50th=[ 176], 99.90th=[ 224], 99.95th=[ 232], 00:24:44.050 | 99.99th=[ 251] 00:24:44.050 bw ( KiB/s): min=115712, max=240640, per=6.25%, avg=159935.65, stdev=35930.99, samples=20 00:24:44.050 iops : min= 452, max= 940, avg=624.70, stdev=140.41, samples=20 00:24:44.050 lat (msec) : 10=0.27%, 20=1.28%, 50=2.38%, 100=42.15%, 250=53.91% 00:24:44.050 lat (msec) : 500=0.02% 00:24:44.050 cpu : usr=0.23%, sys=2.13%, ctx=1859, majf=0, minf=4097 00:24:44.050 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:44.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.050 issued rwts: total=6311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.050 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.050 job9: (groupid=0, jobs=1): err= 0: pid=2914042: Fri Jun 7 23:21:04 2024 00:24:44.050 read: IOPS=984, BW=246MiB/s (258MB/s)(2477MiB/10065msec) 00:24:44.050 slat (usec): min=5, max=80397, avg=807.03, stdev=2825.42 00:24:44.050 clat (msec): min=4, max=173, avg=64.16, stdev=29.08 00:24:44.050 lat (msec): min=4, max=199, avg=64.96, stdev=29.55 00:24:44.050 clat percentiles (msec): 00:24:44.050 | 1.00th=[ 11], 5.00th=[ 21], 10.00th=[ 30], 20.00th=[ 41], 00:24:44.050 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 60], 60.00th=[ 68], 00:24:44.050 | 70.00th=[ 75], 80.00th=[ 88], 90.00th=[ 107], 95.00th=[ 121], 00:24:44.050 | 99.00th=[ 142], 99.50th=[ 146], 99.90th=[ 169], 99.95th=[ 169], 00:24:44.050 | 99.99th=[ 174] 00:24:44.050 bw ( KiB/s): min=129536, max=428032, per=9.84%, avg=251980.80, stdev=84946.90, samples=20 00:24:44.050 iops : min= 506, max= 1672, avg=984.30, stdev=331.82, samples=20 00:24:44.050 lat (msec) : 10=0.82%, 20=3.84%, 50=29.83%, 100=52.41%, 250=13.11% 00:24:44.050 cpu : usr=0.40%, sys=3.11%, ctx=2465, majf=0, minf=4097 00:24:44.050 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:24:44.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.050 issued rwts: total=9907,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.050 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.050 job10: (groupid=0, jobs=1): err= 0: pid=2914043: Fri Jun 7 23:21:04 2024 00:24:44.050 
read: IOPS=952, BW=238MiB/s (250MB/s)(2392MiB/10045msec) 00:24:44.050 slat (usec): min=5, max=80955, avg=902.37, stdev=2870.60 00:24:44.050 clat (msec): min=2, max=153, avg=66.22, stdev=23.72 00:24:44.050 lat (msec): min=2, max=167, avg=67.12, stdev=24.04 00:24:44.050 clat percentiles (msec): 00:24:44.050 | 1.00th=[ 12], 5.00th=[ 26], 10.00th=[ 37], 20.00th=[ 49], 00:24:44.050 | 30.00th=[ 55], 40.00th=[ 61], 50.00th=[ 66], 60.00th=[ 71], 00:24:44.050 | 70.00th=[ 78], 80.00th=[ 85], 90.00th=[ 95], 95.00th=[ 107], 00:24:44.050 | 99.00th=[ 136], 99.50th=[ 138], 99.90th=[ 146], 99.95th=[ 148], 00:24:44.050 | 99.99th=[ 155] 00:24:44.050 bw ( KiB/s): min=161280, max=325632, per=9.50%, avg=243353.60, stdev=40785.18, samples=20 00:24:44.050 iops : min= 630, max= 1272, avg=950.60, stdev=159.32, samples=20 00:24:44.050 lat (msec) : 4=0.19%, 10=0.49%, 20=2.62%, 50=18.36%, 100=70.83% 00:24:44.050 lat (msec) : 250=7.50% 00:24:44.050 cpu : usr=0.28%, sys=2.88%, ctx=2185, majf=0, minf=4097 00:24:44.050 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:24:44.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.050 issued rwts: total=9569,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.050 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.050 00:24:44.050 Run status group 0 (all jobs): 00:24:44.051 READ: bw=2501MiB/s (2622MB/s), 157MiB/s-325MiB/s (164MB/s-341MB/s), io=24.6GiB (26.4GB), run=10011-10078msec 00:24:44.051 00:24:44.051 Disk stats (read/write): 00:24:44.051 nvme0n1: ios=13719/0, merge=0/0, ticks=1216908/0, in_queue=1216908, util=96.46% 00:24:44.051 nvme10n1: ios=14693/0, merge=0/0, ticks=1214421/0, in_queue=1214421, util=96.64% 00:24:44.051 nvme1n1: ios=15677/0, merge=0/0, ticks=1218691/0, in_queue=1218691, util=97.01% 00:24:44.051 nvme2n1: ios=25295/0, merge=0/0, ticks=1219931/0, in_queue=1219931, util=97.25% 00:24:44.051 nvme3n1: ios=23202/0, merge=0/0, ticks=1221334/0, in_queue=1221334, util=97.39% 00:24:44.051 nvme4n1: ios=20426/0, merge=0/0, ticks=1217923/0, in_queue=1217923, util=97.82% 00:24:44.051 nvme5n1: ios=13382/0, merge=0/0, ticks=1214527/0, in_queue=1214527, util=97.99% 00:24:44.051 nvme6n1: ios=20236/0, merge=0/0, ticks=1222164/0, in_queue=1222164, util=98.19% 00:24:44.051 nvme7n1: ios=12339/0, merge=0/0, ticks=1219779/0, in_queue=1219779, util=98.66% 00:24:44.051 nvme8n1: ios=19536/0, merge=0/0, ticks=1220790/0, in_queue=1220790, util=98.93% 00:24:44.051 nvme9n1: ios=18700/0, merge=0/0, ticks=1221903/0, in_queue=1221903, util=99.12% 00:24:44.051 23:21:04 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:24:44.051 [global] 00:24:44.051 thread=1 00:24:44.051 invalidate=1 00:24:44.051 rw=randwrite 00:24:44.051 time_based=1 00:24:44.051 runtime=10 00:24:44.051 ioengine=libaio 00:24:44.051 direct=1 00:24:44.051 bs=262144 00:24:44.051 iodepth=64 00:24:44.051 norandommap=1 00:24:44.051 numjobs=1 00:24:44.051 00:24:44.051 [job0] 00:24:44.051 filename=/dev/nvme0n1 00:24:44.051 [job1] 00:24:44.051 filename=/dev/nvme10n1 00:24:44.051 [job2] 00:24:44.051 filename=/dev/nvme1n1 00:24:44.051 [job3] 00:24:44.051 filename=/dev/nvme2n1 00:24:44.051 [job4] 00:24:44.051 filename=/dev/nvme3n1 00:24:44.051 [job5] 00:24:44.051 filename=/dev/nvme4n1 00:24:44.051 [job6] 00:24:44.051 filename=/dev/nvme5n1 00:24:44.051 [job7] 00:24:44.051 
filename=/dev/nvme6n1 00:24:44.051 [job8] 00:24:44.051 filename=/dev/nvme7n1 00:24:44.051 [job9] 00:24:44.051 filename=/dev/nvme8n1 00:24:44.051 [job10] 00:24:44.051 filename=/dev/nvme9n1 00:24:44.051 Could not set queue depth (nvme0n1) 00:24:44.051 Could not set queue depth (nvme10n1) 00:24:44.051 Could not set queue depth (nvme1n1) 00:24:44.051 Could not set queue depth (nvme2n1) 00:24:44.051 Could not set queue depth (nvme3n1) 00:24:44.051 Could not set queue depth (nvme4n1) 00:24:44.051 Could not set queue depth (nvme5n1) 00:24:44.051 Could not set queue depth (nvme6n1) 00:24:44.051 Could not set queue depth (nvme7n1) 00:24:44.051 Could not set queue depth (nvme8n1) 00:24:44.051 Could not set queue depth (nvme9n1) 00:24:44.051 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.051 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.051 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.051 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.051 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.051 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.051 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.051 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.051 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.051 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.051 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.051 fio-3.35 00:24:44.051 Starting 11 threads 00:24:54.054 00:24:54.054 job0: (groupid=0, jobs=1): err= 0: pid=2916250: Fri Jun 7 23:21:16 2024 00:24:54.055 write: IOPS=766, BW=192MiB/s (201MB/s)(1932MiB/10082msec); 0 zone resets 00:24:54.055 slat (usec): min=19, max=86161, avg=1141.42, stdev=2544.83 00:24:54.055 clat (msec): min=5, max=170, avg=82.34, stdev=25.31 00:24:54.055 lat (msec): min=6, max=170, avg=83.48, stdev=25.72 00:24:54.055 clat percentiles (msec): 00:24:54.055 | 1.00th=[ 19], 5.00th=[ 33], 10.00th=[ 51], 20.00th=[ 65], 00:24:54.055 | 30.00th=[ 71], 40.00th=[ 79], 50.00th=[ 83], 60.00th=[ 88], 00:24:54.055 | 70.00th=[ 95], 80.00th=[ 104], 90.00th=[ 112], 95.00th=[ 123], 00:24:54.055 | 99.00th=[ 144], 99.50th=[ 155], 99.90th=[ 161], 99.95th=[ 165], 00:24:54.055 | 99.99th=[ 171] 00:24:54.055 bw ( KiB/s): min=124928, max=268288, per=9.87%, avg=196198.40, stdev=44307.38, samples=20 00:24:54.055 iops : min= 488, max= 1048, avg=766.40, stdev=173.08, samples=20 00:24:54.055 lat (msec) : 10=0.09%, 20=1.20%, 50=8.63%, 100=64.54%, 250=25.53% 00:24:54.055 cpu : usr=1.79%, sys=2.38%, ctx=2922, majf=0, minf=1 00:24:54.055 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:54.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.055 issued 
rwts: total=0,7727,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.055 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.055 job1: (groupid=0, jobs=1): err= 0: pid=2916257: Fri Jun 7 23:21:16 2024 00:24:54.055 write: IOPS=572, BW=143MiB/s (150MB/s)(1453MiB/10143msec); 0 zone resets 00:24:54.055 slat (usec): min=27, max=46981, avg=1646.43, stdev=3161.11 00:24:54.055 clat (msec): min=27, max=311, avg=109.81, stdev=30.75 00:24:54.055 lat (msec): min=27, max=311, avg=111.46, stdev=31.02 00:24:54.055 clat percentiles (msec): 00:24:54.055 | 1.00th=[ 47], 5.00th=[ 78], 10.00th=[ 81], 20.00th=[ 86], 00:24:54.055 | 30.00th=[ 92], 40.00th=[ 101], 50.00th=[ 106], 60.00th=[ 111], 00:24:54.055 | 70.00th=[ 114], 80.00th=[ 123], 90.00th=[ 159], 95.00th=[ 171], 00:24:54.055 | 99.00th=[ 190], 99.50th=[ 239], 99.90th=[ 305], 99.95th=[ 305], 00:24:54.055 | 99.99th=[ 313] 00:24:54.055 bw ( KiB/s): min=94208, max=199680, per=7.41%, avg=147148.80, stdev=32059.89, samples=20 00:24:54.055 iops : min= 368, max= 780, avg=574.80, stdev=125.23, samples=20 00:24:54.055 lat (msec) : 50=1.27%, 100=38.08%, 250=60.27%, 500=0.38% 00:24:54.055 cpu : usr=1.40%, sys=1.68%, ctx=1675, majf=0, minf=1 00:24:54.055 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:54.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.055 issued rwts: total=0,5811,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.055 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.055 job2: (groupid=0, jobs=1): err= 0: pid=2916263: Fri Jun 7 23:21:16 2024 00:24:54.055 write: IOPS=631, BW=158MiB/s (165MB/s)(1600MiB/10139msec); 0 zone resets 00:24:54.055 slat (usec): min=24, max=59999, avg=1426.52, stdev=3059.65 00:24:54.055 clat (msec): min=4, max=298, avg=99.85, stdev=40.09 00:24:54.055 lat (msec): min=5, max=298, avg=101.28, stdev=40.68 00:24:54.055 clat percentiles (msec): 00:24:54.055 | 1.00th=[ 24], 5.00th=[ 55], 10.00th=[ 62], 20.00th=[ 67], 00:24:54.055 | 30.00th=[ 73], 40.00th=[ 78], 50.00th=[ 86], 60.00th=[ 106], 00:24:54.055 | 70.00th=[ 112], 80.00th=[ 140], 90.00th=[ 163], 95.00th=[ 171], 00:24:54.055 | 99.00th=[ 186], 99.50th=[ 220], 99.90th=[ 279], 99.95th=[ 288], 00:24:54.055 | 99.99th=[ 300] 00:24:54.055 bw ( KiB/s): min=94208, max=263168, per=8.16%, avg=162212.10, stdev=56396.09, samples=20 00:24:54.055 iops : min= 368, max= 1028, avg=633.60, stdev=220.34, samples=20 00:24:54.055 lat (msec) : 10=0.14%, 20=0.64%, 50=2.39%, 100=51.49%, 250=45.05% 00:24:54.055 lat (msec) : 500=0.28% 00:24:54.055 cpu : usr=1.57%, sys=1.90%, ctx=2185, majf=0, minf=1 00:24:54.055 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:54.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.055 issued rwts: total=0,6399,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.055 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.055 job3: (groupid=0, jobs=1): err= 0: pid=2916264: Fri Jun 7 23:21:16 2024 00:24:54.055 write: IOPS=839, BW=210MiB/s (220MB/s)(2112MiB/10064msec); 0 zone resets 00:24:54.055 slat (usec): min=25, max=14155, avg=1089.63, stdev=2042.07 00:24:54.055 clat (msec): min=2, max=144, avg=75.13, stdev=18.17 00:24:54.055 lat (msec): min=2, max=144, avg=76.22, stdev=18.44 00:24:54.055 clat percentiles (msec): 00:24:54.055 | 1.00th=[ 14], 
5.00th=[ 42], 10.00th=[ 57], 20.00th=[ 64], 00:24:54.055 | 30.00th=[ 70], 40.00th=[ 75], 50.00th=[ 79], 60.00th=[ 81], 00:24:54.055 | 70.00th=[ 83], 80.00th=[ 86], 90.00th=[ 96], 95.00th=[ 103], 00:24:54.055 | 99.00th=[ 120], 99.50th=[ 125], 99.90th=[ 130], 99.95th=[ 138], 00:24:54.055 | 99.99th=[ 146] 00:24:54.055 bw ( KiB/s): min=157696, max=278016, per=10.80%, avg=214656.00, stdev=34321.44, samples=20 00:24:54.055 iops : min= 616, max= 1086, avg=838.50, stdev=134.07, samples=20 00:24:54.055 lat (msec) : 4=0.06%, 10=0.41%, 20=1.61%, 50=5.98%, 100=84.43% 00:24:54.055 lat (msec) : 250=7.50% 00:24:54.055 cpu : usr=2.05%, sys=2.47%, ctx=2838, majf=0, minf=1 00:24:54.055 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:24:54.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.055 issued rwts: total=0,8448,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.055 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.055 job4: (groupid=0, jobs=1): err= 0: pid=2916265: Fri Jun 7 23:21:16 2024 00:24:54.055 write: IOPS=905, BW=226MiB/s (237MB/s)(2278MiB/10063msec); 0 zone resets 00:24:54.055 slat (usec): min=23, max=17136, avg=1054.91, stdev=1877.38 00:24:54.055 clat (msec): min=8, max=132, avg=69.61, stdev=12.29 00:24:54.055 lat (msec): min=8, max=132, avg=70.66, stdev=12.43 00:24:54.055 clat percentiles (msec): 00:24:54.055 | 1.00th=[ 39], 5.00th=[ 56], 10.00th=[ 58], 20.00th=[ 60], 00:24:54.055 | 30.00th=[ 62], 40.00th=[ 65], 50.00th=[ 68], 60.00th=[ 73], 00:24:54.055 | 70.00th=[ 79], 80.00th=[ 81], 90.00th=[ 84], 95.00th=[ 86], 00:24:54.055 | 99.00th=[ 108], 99.50th=[ 116], 99.90th=[ 125], 99.95th=[ 129], 00:24:54.055 | 99.99th=[ 133] 00:24:54.055 bw ( KiB/s): min=186880, max=272896, per=11.66%, avg=231654.40, stdev=29167.44, samples=20 00:24:54.055 iops : min= 730, max= 1066, avg=904.90, stdev=113.94, samples=20 00:24:54.055 lat (msec) : 10=0.01%, 20=0.16%, 50=1.74%, 100=96.65%, 250=1.43% 00:24:54.055 cpu : usr=2.08%, sys=3.03%, ctx=2545, majf=0, minf=1 00:24:54.055 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:24:54.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.055 issued rwts: total=0,9112,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.055 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.055 job5: (groupid=0, jobs=1): err= 0: pid=2916266: Fri Jun 7 23:21:16 2024 00:24:54.055 write: IOPS=743, BW=186MiB/s (195MB/s)(1867MiB/10043msec); 0 zone resets 00:24:54.055 slat (usec): min=17, max=34512, avg=1219.38, stdev=2603.78 00:24:54.055 clat (msec): min=9, max=186, avg=84.76, stdev=37.71 00:24:54.055 lat (msec): min=11, max=186, avg=85.97, stdev=38.25 00:24:54.055 clat percentiles (msec): 00:24:54.055 | 1.00th=[ 17], 5.00th=[ 39], 10.00th=[ 44], 20.00th=[ 55], 00:24:54.055 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 77], 60.00th=[ 82], 00:24:54.055 | 70.00th=[ 95], 80.00th=[ 114], 90.00th=[ 142], 95.00th=[ 169], 00:24:54.055 | 99.00th=[ 180], 99.50th=[ 182], 99.90th=[ 186], 99.95th=[ 186], 00:24:54.055 | 99.99th=[ 186] 00:24:54.055 bw ( KiB/s): min=94208, max=329728, per=9.54%, avg=189568.00, stdev=67097.79, samples=20 00:24:54.055 iops : min= 368, max= 1288, avg=740.50, stdev=262.10, samples=20 00:24:54.055 lat (msec) : 10=0.01%, 20=1.49%, 50=13.67%, 100=56.41%, 250=28.41% 
00:24:54.055 cpu : usr=1.78%, sys=2.41%, ctx=2564, majf=0, minf=1 00:24:54.055 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:54.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.055 issued rwts: total=0,7468,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.055 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.055 job6: (groupid=0, jobs=1): err= 0: pid=2916267: Fri Jun 7 23:21:16 2024 00:24:54.055 write: IOPS=638, BW=160MiB/s (167MB/s)(1619MiB/10140msec); 0 zone resets 00:24:54.055 slat (usec): min=22, max=37077, avg=1436.29, stdev=2846.97 00:24:54.055 clat (msec): min=2, max=303, avg=98.72, stdev=34.66 00:24:54.055 lat (msec): min=3, max=303, avg=100.15, stdev=35.11 00:24:54.055 clat percentiles (msec): 00:24:54.055 | 1.00th=[ 25], 5.00th=[ 69], 10.00th=[ 75], 20.00th=[ 79], 00:24:54.055 | 30.00th=[ 81], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 91], 00:24:54.055 | 70.00th=[ 104], 80.00th=[ 116], 90.00th=[ 161], 95.00th=[ 171], 00:24:54.055 | 99.00th=[ 184], 99.50th=[ 222], 99.90th=[ 279], 99.95th=[ 296], 00:24:54.055 | 99.99th=[ 305] 00:24:54.055 bw ( KiB/s): min=91136, max=227840, per=8.26%, avg=164198.40, stdev=42993.59, samples=20 00:24:54.055 iops : min= 356, max= 890, avg=641.40, stdev=167.94, samples=20 00:24:54.055 lat (msec) : 4=0.03%, 10=0.20%, 20=0.65%, 50=1.85%, 100=65.08% 00:24:54.055 lat (msec) : 250=31.91%, 500=0.28% 00:24:54.055 cpu : usr=1.41%, sys=1.95%, ctx=2100, majf=0, minf=1 00:24:54.055 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:24:54.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.055 issued rwts: total=0,6477,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.055 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.055 job7: (groupid=0, jobs=1): err= 0: pid=2916268: Fri Jun 7 23:21:16 2024 00:24:54.055 write: IOPS=705, BW=176MiB/s (185MB/s)(1789MiB/10146msec); 0 zone resets 00:24:54.055 slat (usec): min=24, max=28079, avg=1299.60, stdev=2749.63 00:24:54.055 clat (msec): min=2, max=298, avg=89.40, stdev=45.27 00:24:54.055 lat (msec): min=2, max=298, avg=90.70, stdev=45.92 00:24:54.055 clat percentiles (msec): 00:24:54.056 | 1.00th=[ 18], 5.00th=[ 41], 10.00th=[ 45], 20.00th=[ 48], 00:24:54.056 | 30.00th=[ 54], 40.00th=[ 59], 50.00th=[ 77], 60.00th=[ 105], 00:24:54.056 | 70.00th=[ 111], 80.00th=[ 132], 90.00th=[ 163], 95.00th=[ 171], 00:24:54.056 | 99.00th=[ 184], 99.50th=[ 211], 99.90th=[ 279], 99.95th=[ 288], 00:24:54.056 | 99.99th=[ 300] 00:24:54.056 bw ( KiB/s): min=94208, max=347136, per=9.14%, avg=181555.20, stdev=84153.04, samples=20 00:24:54.056 iops : min= 368, max= 1356, avg=709.20, stdev=328.72, samples=20 00:24:54.056 lat (msec) : 4=0.03%, 10=0.52%, 20=0.81%, 50=22.05%, 100=33.08% 00:24:54.056 lat (msec) : 250=43.26%, 500=0.25% 00:24:54.056 cpu : usr=1.57%, sys=2.16%, ctx=2356, majf=0, minf=1 00:24:54.056 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:24:54.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.056 issued rwts: total=0,7156,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.056 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.056 job8: (groupid=0, jobs=1): err= 
0: pid=2916269: Fri Jun 7 23:21:16 2024 00:24:54.056 write: IOPS=666, BW=167MiB/s (175MB/s)(1680MiB/10082msec); 0 zone resets 00:24:54.056 slat (usec): min=21, max=27581, avg=1331.89, stdev=2676.27 00:24:54.056 clat (msec): min=6, max=175, avg=94.67, stdev=28.11 00:24:54.056 lat (msec): min=6, max=178, avg=96.00, stdev=28.53 00:24:54.056 clat percentiles (msec): 00:24:54.056 | 1.00th=[ 23], 5.00th=[ 46], 10.00th=[ 61], 20.00th=[ 68], 00:24:54.056 | 30.00th=[ 82], 40.00th=[ 92], 50.00th=[ 101], 60.00th=[ 105], 00:24:54.056 | 70.00th=[ 110], 80.00th=[ 114], 90.00th=[ 126], 95.00th=[ 140], 00:24:54.056 | 99.00th=[ 163], 99.50th=[ 169], 99.90th=[ 174], 99.95th=[ 176], 00:24:54.056 | 99.99th=[ 176] 00:24:54.056 bw ( KiB/s): min=114688, max=268288, per=8.58%, avg=170405.55, stdev=41387.95, samples=20 00:24:54.056 iops : min= 448, max= 1048, avg=665.60, stdev=161.73, samples=20 00:24:54.056 lat (msec) : 10=0.07%, 20=0.68%, 50=5.05%, 100=43.88%, 250=50.32% 00:24:54.056 cpu : usr=1.63%, sys=2.03%, ctx=2428, majf=0, minf=1 00:24:54.056 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:54.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.056 issued rwts: total=0,6719,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.056 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.056 job9: (groupid=0, jobs=1): err= 0: pid=2916270: Fri Jun 7 23:21:16 2024 00:24:54.056 write: IOPS=663, BW=166MiB/s (174MB/s)(1681MiB/10140msec); 0 zone resets 00:24:54.056 slat (usec): min=18, max=90563, avg=1371.25, stdev=3207.06 00:24:54.056 clat (msec): min=5, max=315, avg=95.09, stdev=34.05 00:24:54.056 lat (msec): min=8, max=315, avg=96.46, stdev=34.38 00:24:54.056 clat percentiles (msec): 00:24:54.056 | 1.00th=[ 29], 5.00th=[ 54], 10.00th=[ 61], 20.00th=[ 65], 00:24:54.056 | 30.00th=[ 69], 40.00th=[ 84], 50.00th=[ 100], 60.00th=[ 106], 00:24:54.056 | 70.00th=[ 110], 80.00th=[ 113], 90.00th=[ 136], 95.00th=[ 155], 00:24:54.056 | 99.00th=[ 190], 99.50th=[ 220], 99.90th=[ 296], 99.95th=[ 309], 00:24:54.056 | 99.99th=[ 317] 00:24:54.056 bw ( KiB/s): min=100352, max=264192, per=8.58%, avg=170547.20, stdev=43450.32, samples=20 00:24:54.056 iops : min= 392, max= 1032, avg=666.20, stdev=169.73, samples=20 00:24:54.056 lat (msec) : 10=0.04%, 20=0.43%, 50=3.51%, 100=46.59%, 250=49.04% 00:24:54.056 lat (msec) : 500=0.39% 00:24:54.056 cpu : usr=1.42%, sys=2.20%, ctx=2127, majf=0, minf=1 00:24:54.056 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:54.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.056 issued rwts: total=0,6725,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.056 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.056 job10: (groupid=0, jobs=1): err= 0: pid=2916271: Fri Jun 7 23:21:16 2024 00:24:54.056 write: IOPS=665, BW=166MiB/s (174MB/s)(1678MiB/10082msec); 0 zone resets 00:24:54.056 slat (usec): min=27, max=24177, avg=1455.91, stdev=2587.50 00:24:54.056 clat (msec): min=15, max=167, avg=94.66, stdev=18.41 00:24:54.056 lat (msec): min=15, max=167, avg=96.12, stdev=18.55 00:24:54.056 clat percentiles (msec): 00:24:54.056 | 1.00th=[ 43], 5.00th=[ 68], 10.00th=[ 75], 20.00th=[ 80], 00:24:54.056 | 30.00th=[ 84], 40.00th=[ 88], 50.00th=[ 95], 60.00th=[ 102], 00:24:54.056 | 70.00th=[ 106], 80.00th=[ 112], 
90.00th=[ 116], 95.00th=[ 123], 00:24:54.056 | 99.00th=[ 138], 99.50th=[ 142], 99.90th=[ 157], 99.95th=[ 163], 00:24:54.056 | 99.99th=[ 167] 00:24:54.056 bw ( KiB/s): min=124928, max=223744, per=8.56%, avg=170188.80, stdev=26885.66, samples=20 00:24:54.056 iops : min= 488, max= 874, avg=664.80, stdev=105.02, samples=20 00:24:54.056 lat (msec) : 20=0.15%, 50=1.10%, 100=56.30%, 250=42.45% 00:24:54.056 cpu : usr=1.73%, sys=2.05%, ctx=1862, majf=0, minf=1 00:24:54.056 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:54.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.056 issued rwts: total=0,6711,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.056 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.056 00:24:54.056 Run status group 0 (all jobs): 00:24:54.056 WRITE: bw=1940MiB/s (2035MB/s), 143MiB/s-226MiB/s (150MB/s-237MB/s), io=19.2GiB (20.6GB), run=10043-10146msec 00:24:54.056 00:24:54.056 Disk stats (read/write): 00:24:54.056 nvme0n1: ios=46/14931, merge=0/0, ticks=1637/1197555, in_queue=1199192, util=100.00% 00:24:54.056 nvme10n1: ios=43/11574, merge=0/0, ticks=928/1222787, in_queue=1223715, util=100.00% 00:24:54.056 nvme1n1: ios=34/12753, merge=0/0, ticks=1721/1226525, in_queue=1228246, util=99.97% 00:24:54.056 nvme2n1: ios=24/16351, merge=0/0, ticks=212/1203207, in_queue=1203419, util=98.69% 00:24:54.056 nvme3n1: ios=13/17788, merge=0/0, ticks=362/1201414, in_queue=1201776, util=97.47% 00:24:54.056 nvme4n1: ios=54/14521, merge=0/0, ticks=1527/1202539, in_queue=1204066, util=99.96% 00:24:54.056 nvme5n1: ios=0/12908, merge=0/0, ticks=0/1228249, in_queue=1228249, util=97.98% 00:24:54.056 nvme6n1: ios=0/14253, merge=0/0, ticks=0/1227560, in_queue=1227560, util=98.15% 00:24:54.056 nvme7n1: ios=0/13102, merge=0/0, ticks=0/1202743, in_queue=1202743, util=98.61% 00:24:54.056 nvme8n1: ios=44/13404, merge=0/0, ticks=1774/1211534, in_queue=1213308, util=100.00% 00:24:54.056 nvme9n1: ios=0/13082, merge=0/0, ticks=0/1198009, in_queue=1198009, util=99.05% 00:24:54.056 23:21:16 -- target/multiconnection.sh@36 -- # sync 00:24:54.056 23:21:16 -- target/multiconnection.sh@37 -- # seq 1 11 00:24:54.056 23:21:16 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.056 23:21:16 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:54.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:54.056 23:21:16 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:24:54.056 23:21:16 -- common/autotest_common.sh@1198 -- # local i=0 00:24:54.056 23:21:16 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:54.056 23:21:16 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:24:54.056 23:21:16 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:54.056 23:21:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:24:54.056 23:21:16 -- common/autotest_common.sh@1210 -- # return 0 00:24:54.056 23:21:16 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:54.056 23:21:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:54.056 23:21:16 -- common/autotest_common.sh@10 -- # set +x 00:24:54.056 23:21:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:54.056 23:21:16 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.056 23:21:16 -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:24:54.056 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:24:54.056 23:21:16 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:24:54.056 23:21:16 -- common/autotest_common.sh@1198 -- # local i=0 00:24:54.056 23:21:16 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:54.056 23:21:16 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:24:54.056 23:21:16 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:54.056 23:21:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:24:54.056 23:21:16 -- common/autotest_common.sh@1210 -- # return 0 00:24:54.056 23:21:16 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:54.056 23:21:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:54.056 23:21:16 -- common/autotest_common.sh@10 -- # set +x 00:24:54.056 23:21:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:54.056 23:21:16 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.056 23:21:16 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:24:54.629 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:24:54.629 23:21:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:24:54.629 23:21:17 -- common/autotest_common.sh@1198 -- # local i=0 00:24:54.629 23:21:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:54.629 23:21:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:24:54.629 23:21:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:54.629 23:21:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:24:54.629 23:21:17 -- common/autotest_common.sh@1210 -- # return 0 00:24:54.629 23:21:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:54.629 23:21:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:54.629 23:21:17 -- common/autotest_common.sh@10 -- # set +x 00:24:54.629 23:21:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:54.629 23:21:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.629 23:21:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:24:54.890 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:24:54.890 23:21:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:24:54.890 23:21:17 -- common/autotest_common.sh@1198 -- # local i=0 00:24:54.890 23:21:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:54.890 23:21:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:24:54.890 23:21:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:54.891 23:21:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:24:54.891 23:21:17 -- common/autotest_common.sh@1210 -- # return 0 00:24:54.891 23:21:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:24:54.891 23:21:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:54.891 23:21:17 -- common/autotest_common.sh@10 -- # set +x 00:24:54.891 23:21:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:54.891 23:21:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.891 23:21:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:24:55.151 
NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:24:55.151 23:21:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:24:55.151 23:21:17 -- common/autotest_common.sh@1198 -- # local i=0 00:24:55.151 23:21:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:55.151 23:21:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:24:55.151 23:21:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:55.151 23:21:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:24:55.151 23:21:17 -- common/autotest_common.sh@1210 -- # return 0 00:24:55.152 23:21:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:24:55.152 23:21:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:55.152 23:21:17 -- common/autotest_common.sh@10 -- # set +x 00:24:55.152 23:21:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:55.152 23:21:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.152 23:21:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:24:55.412 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:24:55.412 23:21:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:24:55.412 23:21:17 -- common/autotest_common.sh@1198 -- # local i=0 00:24:55.412 23:21:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:55.412 23:21:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:24:55.412 23:21:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:55.412 23:21:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:24:55.412 23:21:17 -- common/autotest_common.sh@1210 -- # return 0 00:24:55.412 23:21:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:24:55.412 23:21:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:55.412 23:21:17 -- common/autotest_common.sh@10 -- # set +x 00:24:55.412 23:21:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:55.412 23:21:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.412 23:21:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:24:55.412 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:24:55.412 23:21:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:24:55.412 23:21:18 -- common/autotest_common.sh@1198 -- # local i=0 00:24:55.412 23:21:18 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:55.412 23:21:18 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:24:55.412 23:21:18 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:55.412 23:21:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:24:55.672 23:21:18 -- common/autotest_common.sh@1210 -- # return 0 00:24:55.672 23:21:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:24:55.672 23:21:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:55.672 23:21:18 -- common/autotest_common.sh@10 -- # set +x 00:24:55.673 23:21:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:55.673 23:21:18 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.673 23:21:18 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:24:55.673 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:24:55.673 23:21:18 -- 
target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:24:55.673 23:21:18 -- common/autotest_common.sh@1198 -- # local i=0 00:24:55.673 23:21:18 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:55.673 23:21:18 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:24:55.673 23:21:18 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:55.673 23:21:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:24:55.673 23:21:18 -- common/autotest_common.sh@1210 -- # return 0 00:24:55.673 23:21:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:24:55.673 23:21:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:55.673 23:21:18 -- common/autotest_common.sh@10 -- # set +x 00:24:55.673 23:21:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:55.673 23:21:18 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.673 23:21:18 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:24:55.933 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:24:55.933 23:21:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:24:55.933 23:21:18 -- common/autotest_common.sh@1198 -- # local i=0 00:24:55.933 23:21:18 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:55.933 23:21:18 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:24:55.933 23:21:18 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:55.933 23:21:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:24:55.933 23:21:18 -- common/autotest_common.sh@1210 -- # return 0 00:24:55.933 23:21:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:24:55.933 23:21:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:55.933 23:21:18 -- common/autotest_common.sh@10 -- # set +x 00:24:55.933 23:21:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:55.933 23:21:18 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.933 23:21:18 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:24:56.238 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:24:56.238 23:21:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:24:56.238 23:21:18 -- common/autotest_common.sh@1198 -- # local i=0 00:24:56.238 23:21:18 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:56.238 23:21:18 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:24:56.238 23:21:18 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:56.238 23:21:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:24:56.238 23:21:18 -- common/autotest_common.sh@1210 -- # return 0 00:24:56.238 23:21:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:24:56.238 23:21:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:56.238 23:21:18 -- common/autotest_common.sh@10 -- # set +x 00:24:56.238 23:21:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:56.238 23:21:18 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:56.238 23:21:18 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:24:56.238 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:24:56.238 23:21:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:24:56.238 23:21:18 -- 
common/autotest_common.sh@1198 -- # local i=0 00:24:56.238 23:21:18 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:56.238 23:21:18 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:24:56.238 23:21:18 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:56.238 23:21:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:24:56.238 23:21:18 -- common/autotest_common.sh@1210 -- # return 0 00:24:56.238 23:21:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:24:56.238 23:21:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:56.238 23:21:18 -- common/autotest_common.sh@10 -- # set +x 00:24:56.238 23:21:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:56.238 23:21:18 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:24:56.238 23:21:18 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:24:56.238 23:21:18 -- target/multiconnection.sh@47 -- # nvmftestfini 00:24:56.238 23:21:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:56.238 23:21:18 -- nvmf/common.sh@116 -- # sync 00:24:56.238 23:21:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:56.238 23:21:18 -- nvmf/common.sh@119 -- # set +e 00:24:56.238 23:21:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:56.238 23:21:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:56.238 rmmod nvme_tcp 00:24:56.238 rmmod nvme_fabrics 00:24:56.238 rmmod nvme_keyring 00:24:56.238 23:21:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:56.238 23:21:18 -- nvmf/common.sh@123 -- # set -e 00:24:56.238 23:21:18 -- nvmf/common.sh@124 -- # return 0 00:24:56.238 23:21:18 -- nvmf/common.sh@477 -- # '[' -n 2905371 ']' 00:24:56.238 23:21:18 -- nvmf/common.sh@478 -- # killprocess 2905371 00:24:56.238 23:21:18 -- common/autotest_common.sh@926 -- # '[' -z 2905371 ']' 00:24:56.238 23:21:18 -- common/autotest_common.sh@930 -- # kill -0 2905371 00:24:56.238 23:21:18 -- common/autotest_common.sh@931 -- # uname 00:24:56.238 23:21:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:56.238 23:21:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2905371 00:24:56.238 23:21:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:56.238 23:21:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:56.238 23:21:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2905371' 00:24:56.238 killing process with pid 2905371 00:24:56.238 23:21:18 -- common/autotest_common.sh@945 -- # kill 2905371 00:24:56.238 23:21:18 -- common/autotest_common.sh@950 -- # wait 2905371 00:24:56.499 23:21:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:56.499 23:21:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:56.499 23:21:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:56.499 23:21:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:56.499 23:21:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:56.499 23:21:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.499 23:21:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:56.499 23:21:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.044 23:21:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:59.044 00:24:59.044 real 1m16.794s 00:24:59.044 user 4m54.880s 00:24:59.044 sys 0m22.615s 00:24:59.044 23:21:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:59.044 
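Editor's note: the trace above is the nvmf_multiconnection teardown. For each of the eleven subsystems it disconnects the initiator, waits until the SPDKn serial no longer appears in lsblk, then deletes the subsystem over RPC, before unloading the nvme-tcp/nvme-fabrics modules and killing the nvmf_tgt process. A minimal sketch of that per-subsystem loop, assuming the stock rpc.py wrapper and the SPDKn serial scheme seen in the log (retry count and sleep interval are illustrative, not the script's values):

    # Hedged sketch of the teardown pattern traced above, not the exact
    # multiconnection.sh source.
    for i in $(seq 1 11); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
        # wait until no block device reports serial SPDK${i} any more
        for try in $(seq 1 15); do
            lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}" || break
            sleep 1
        done
        ./scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    done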
23:21:21 -- common/autotest_common.sh@10 -- # set +x 00:24:59.044 ************************************ 00:24:59.044 END TEST nvmf_multiconnection 00:24:59.044 ************************************ 00:24:59.044 23:21:21 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:59.044 23:21:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:59.044 23:21:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:59.044 23:21:21 -- common/autotest_common.sh@10 -- # set +x 00:24:59.044 ************************************ 00:24:59.044 START TEST nvmf_initiator_timeout 00:24:59.044 ************************************ 00:24:59.044 23:21:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:59.044 * Looking for test storage... 00:24:59.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:59.044 23:21:21 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:59.044 23:21:21 -- nvmf/common.sh@7 -- # uname -s 00:24:59.044 23:21:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:59.044 23:21:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:59.044 23:21:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:59.044 23:21:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:59.044 23:21:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:59.044 23:21:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:59.044 23:21:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:59.044 23:21:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:59.044 23:21:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:59.044 23:21:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:59.044 23:21:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:59.044 23:21:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:59.044 23:21:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:59.044 23:21:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:59.044 23:21:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:59.044 23:21:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:59.044 23:21:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:59.044 23:21:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:59.044 23:21:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:59.044 23:21:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.044 23:21:21 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.044 23:21:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.044 23:21:21 -- paths/export.sh@5 -- # export PATH 00:24:59.044 23:21:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.044 23:21:21 -- nvmf/common.sh@46 -- # : 0 00:24:59.044 23:21:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:59.044 23:21:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:59.044 23:21:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:59.044 23:21:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:59.044 23:21:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:59.044 23:21:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:59.044 23:21:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:59.044 23:21:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:59.044 23:21:21 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:59.044 23:21:21 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:59.044 23:21:21 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:24:59.044 23:21:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:59.044 23:21:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:59.044 23:21:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:59.044 23:21:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:59.044 23:21:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:59.044 23:21:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.044 23:21:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:59.044 23:21:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.044 23:21:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:59.044 23:21:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:59.044 23:21:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:59.044 23:21:21 -- common/autotest_common.sh@10 -- # set +x 00:25:07.184 23:21:28 -- nvmf/common.sh@288 -- # local 
intel=0x8086 mellanox=0x15b3 pci 00:25:07.184 23:21:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:07.184 23:21:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:07.184 23:21:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:07.184 23:21:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:07.184 23:21:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:07.184 23:21:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:07.184 23:21:28 -- nvmf/common.sh@294 -- # net_devs=() 00:25:07.184 23:21:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:07.184 23:21:28 -- nvmf/common.sh@295 -- # e810=() 00:25:07.184 23:21:28 -- nvmf/common.sh@295 -- # local -ga e810 00:25:07.184 23:21:28 -- nvmf/common.sh@296 -- # x722=() 00:25:07.184 23:21:28 -- nvmf/common.sh@296 -- # local -ga x722 00:25:07.184 23:21:28 -- nvmf/common.sh@297 -- # mlx=() 00:25:07.184 23:21:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:07.184 23:21:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:07.184 23:21:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:07.184 23:21:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:07.184 23:21:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:07.184 23:21:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:07.184 23:21:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:07.184 23:21:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:07.184 23:21:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:07.184 23:21:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:07.184 23:21:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:07.184 23:21:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:07.184 23:21:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:07.184 23:21:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:07.184 23:21:28 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:07.184 23:21:28 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:07.184 23:21:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:07.184 23:21:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:07.184 23:21:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:07.184 23:21:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:07.184 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:07.184 23:21:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:07.184 23:21:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:07.184 23:21:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.184 23:21:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.184 23:21:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:07.184 23:21:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:07.184 23:21:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:07.184 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:07.184 23:21:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:07.184 23:21:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:07.184 23:21:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.184 23:21:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.184 23:21:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:07.184 23:21:28 -- 
nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:07.184 23:21:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:07.184 23:21:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:07.184 23:21:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:07.184 23:21:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.184 23:21:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:07.185 23:21:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.185 23:21:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:07.185 Found net devices under 0000:31:00.0: cvl_0_0 00:25:07.185 23:21:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.185 23:21:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:07.185 23:21:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.185 23:21:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:07.185 23:21:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.185 23:21:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:07.185 Found net devices under 0000:31:00.1: cvl_0_1 00:25:07.185 23:21:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.185 23:21:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:07.185 23:21:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:07.185 23:21:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:07.185 23:21:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:07.185 23:21:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:07.185 23:21:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:07.185 23:21:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:07.185 23:21:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:07.185 23:21:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:07.185 23:21:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:07.185 23:21:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:07.185 23:21:28 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:07.185 23:21:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:07.185 23:21:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:07.185 23:21:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:07.185 23:21:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:07.185 23:21:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:07.185 23:21:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:07.185 23:21:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:07.185 23:21:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:07.185 23:21:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:07.185 23:21:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:07.185 23:21:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:07.185 23:21:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:07.185 23:21:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:07.185 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:07.185 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:25:07.185 00:25:07.185 --- 10.0.0.2 ping statistics --- 00:25:07.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.185 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:25:07.185 23:21:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:07.185 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:07.185 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:25:07.185 00:25:07.185 --- 10.0.0.1 ping statistics --- 00:25:07.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.185 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:25:07.185 23:21:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:07.185 23:21:28 -- nvmf/common.sh@410 -- # return 0 00:25:07.185 23:21:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:07.185 23:21:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:07.185 23:21:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:07.185 23:21:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:07.185 23:21:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:07.185 23:21:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:07.185 23:21:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:07.185 23:21:28 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:07.185 23:21:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:07.185 23:21:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:07.185 23:21:28 -- common/autotest_common.sh@10 -- # set +x 00:25:07.185 23:21:28 -- nvmf/common.sh@469 -- # nvmfpid=2923445 00:25:07.185 23:21:28 -- nvmf/common.sh@470 -- # waitforlisten 2923445 00:25:07.185 23:21:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:07.185 23:21:28 -- common/autotest_common.sh@819 -- # '[' -z 2923445 ']' 00:25:07.185 23:21:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.185 23:21:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:07.185 23:21:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:07.185 23:21:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:07.185 23:21:28 -- common/autotest_common.sh@10 -- # set +x 00:25:07.185 [2024-06-07 23:21:28.738376] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:07.185 [2024-06-07 23:21:28.738438] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:07.185 EAL: No free 2048 kB hugepages reported on node 1 00:25:07.185 [2024-06-07 23:21:28.810022] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:07.185 [2024-06-07 23:21:28.848306] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:07.185 [2024-06-07 23:21:28.848449] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:07.185 [2024-06-07 23:21:28.848460] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:07.185 [2024-06-07 23:21:28.848469] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:07.185 [2024-06-07 23:21:28.848683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:07.185 [2024-06-07 23:21:28.848804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:07.185 [2024-06-07 23:21:28.848929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.185 [2024-06-07 23:21:28.848930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:07.185 23:21:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:07.185 23:21:29 -- common/autotest_common.sh@852 -- # return 0 00:25:07.185 23:21:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:07.185 23:21:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:07.185 23:21:29 -- common/autotest_common.sh@10 -- # set +x 00:25:07.185 23:21:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:07.185 23:21:29 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:07.185 23:21:29 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:07.185 23:21:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.185 23:21:29 -- common/autotest_common.sh@10 -- # set +x 00:25:07.185 Malloc0 00:25:07.185 23:21:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.185 23:21:29 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:07.185 23:21:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.185 23:21:29 -- common/autotest_common.sh@10 -- # set +x 00:25:07.185 Delay0 00:25:07.185 23:21:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.185 23:21:29 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:07.185 23:21:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.185 23:21:29 -- common/autotest_common.sh@10 -- # set +x 00:25:07.185 [2024-06-07 23:21:29.588726] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:07.185 23:21:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.185 23:21:29 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:07.185 23:21:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.185 23:21:29 -- common/autotest_common.sh@10 -- # set +x 00:25:07.185 23:21:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.185 23:21:29 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:07.185 23:21:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.185 23:21:29 -- common/autotest_common.sh@10 -- # set +x 00:25:07.185 23:21:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.185 23:21:29 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:07.185 23:21:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.185 23:21:29 -- common/autotest_common.sh@10 -- # set +x 00:25:07.185 [2024-06-07 23:21:29.628992] tcp.c: 
951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:07.185 23:21:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.185 23:21:29 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:08.569 23:21:31 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:08.569 23:21:31 -- common/autotest_common.sh@1177 -- # local i=0 00:25:08.569 23:21:31 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:25:08.569 23:21:31 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:25:08.569 23:21:31 -- common/autotest_common.sh@1184 -- # sleep 2 00:25:10.489 23:21:33 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:25:10.489 23:21:33 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:25:10.489 23:21:33 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:25:10.489 23:21:33 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:25:10.489 23:21:33 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:25:10.489 23:21:33 -- common/autotest_common.sh@1187 -- # return 0 00:25:10.489 23:21:33 -- target/initiator_timeout.sh@35 -- # fio_pid=2924429 00:25:10.489 23:21:33 -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:10.489 23:21:33 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:10.489 [global] 00:25:10.489 thread=1 00:25:10.489 invalidate=1 00:25:10.489 rw=write 00:25:10.489 time_based=1 00:25:10.489 runtime=60 00:25:10.489 ioengine=libaio 00:25:10.489 direct=1 00:25:10.489 bs=4096 00:25:10.489 iodepth=1 00:25:10.489 norandommap=0 00:25:10.489 numjobs=1 00:25:10.489 00:25:10.489 verify_dump=1 00:25:10.489 verify_backlog=512 00:25:10.489 verify_state_save=0 00:25:10.489 do_verify=1 00:25:10.489 verify=crc32c-intel 00:25:10.489 [job0] 00:25:10.489 filename=/dev/nvme0n1 00:25:10.489 Could not set queue depth (nvme0n1) 00:25:11.085 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:11.085 fio-3.35 00:25:11.085 Starting 1 thread 00:25:13.628 23:21:36 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:13.628 23:21:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.629 23:21:36 -- common/autotest_common.sh@10 -- # set +x 00:25:13.629 true 00:25:13.629 23:21:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.629 23:21:36 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:13.629 23:21:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.629 23:21:36 -- common/autotest_common.sh@10 -- # set +x 00:25:13.629 true 00:25:13.629 23:21:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.629 23:21:36 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:13.629 23:21:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.629 23:21:36 -- common/autotest_common.sh@10 -- # set +x 00:25:13.629 true 00:25:13.629 23:21:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.629 23:21:36 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 
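Editor's note: the [job0] file above writes to /dev/nvme0n1, which is backed by Delay0, a delay bdev wrapped around Malloc0 with 30 us latencies. While the 60-second fio job runs, the test raises the delay bdev's latencies with bdev_delay_update_latency; the arguments are in microseconds, so 31000000 is roughly 31 s, presumably chosen to exceed the default 30 s NVMe I/O timeout and exercise the initiator timeout path. A hedged sketch of the same calls issued directly with rpc.py (the test goes through its rpc_cmd wrapper instead):

    # Raise Delay0 latencies mid-run (values in microseconds), as traced above.
    ./scripts/rpc.py bdev_delay_update_latency Delay0 avg_read   31000000
    ./scripts/rpc.py bdev_delay_update_latency Delay0 avg_write  31000000
    ./scripts/rpc.py bdev_delay_update_latency Delay0 p99_read   31000000
    ./scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
    # The trace later drops them back to 30 us so queued I/O can complete
    # before fio's runtime expires.
    for lat in avg_read avg_write p99_read p99_write; do
        ./scripts/rpc.py bdev_delay_update_latency Delay0 "$lat" 30
    done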
00:25:13.629 23:21:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.629 23:21:36 -- common/autotest_common.sh@10 -- # set +x 00:25:13.629 true 00:25:13.629 23:21:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.629 23:21:36 -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:17.001 23:21:39 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:17.001 23:21:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:17.001 23:21:39 -- common/autotest_common.sh@10 -- # set +x 00:25:17.001 true 00:25:17.001 23:21:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:17.001 23:21:39 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:17.001 23:21:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:17.001 23:21:39 -- common/autotest_common.sh@10 -- # set +x 00:25:17.001 true 00:25:17.001 23:21:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:17.001 23:21:39 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:17.001 23:21:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:17.001 23:21:39 -- common/autotest_common.sh@10 -- # set +x 00:25:17.001 true 00:25:17.001 23:21:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:17.001 23:21:39 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:17.001 23:21:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:17.001 23:21:39 -- common/autotest_common.sh@10 -- # set +x 00:25:17.001 true 00:25:17.001 23:21:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:17.001 23:21:39 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:17.001 23:21:39 -- target/initiator_timeout.sh@54 -- # wait 2924429 00:26:13.267 00:26:13.267 job0: (groupid=0, jobs=1): err= 0: pid=2924654: Fri Jun 7 23:22:33 2024 00:26:13.267 read: IOPS=119, BW=478KiB/s (489kB/s)(28.0MiB/60001msec) 00:26:13.267 slat (nsec): min=6490, max=64110, avg=25551.20, stdev=5415.96 00:26:13.267 clat (usec): min=306, max=43028, avg=1876.22, stdev=6242.38 00:26:13.267 lat (usec): min=330, max=43054, avg=1901.77, stdev=6242.51 00:26:13.267 clat percentiles (usec): 00:26:13.267 | 1.00th=[ 562], 5.00th=[ 660], 10.00th=[ 717], 20.00th=[ 816], 00:26:13.267 | 30.00th=[ 865], 40.00th=[ 922], 50.00th=[ 955], 60.00th=[ 971], 00:26:13.267 | 70.00th=[ 988], 80.00th=[ 1004], 90.00th=[ 1029], 95.00th=[ 1045], 00:26:13.267 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42730], 00:26:13.267 | 99.99th=[43254] 00:26:13.267 write: IOPS=123, BW=493KiB/s (505kB/s)(28.9MiB/60001msec); 0 zone resets 00:26:13.267 slat (usec): min=9, max=40156, avg=52.03, stdev=850.32 00:26:13.267 clat (usec): min=172, max=42039k, avg=6206.01, stdev=488987.96 00:26:13.267 lat (usec): min=182, max=42039k, avg=6258.04, stdev=488988.49 00:26:13.267 clat percentiles (usec): 00:26:13.267 | 1.00th=[ 302], 5.00th=[ 392], 10.00th=[ 416], 00:26:13.267 | 20.00th=[ 441], 30.00th=[ 482], 40.00th=[ 515], 00:26:13.267 | 50.00th=[ 529], 60.00th=[ 545], 70.00th=[ 553], 00:26:13.267 | 80.00th=[ 570], 90.00th=[ 619], 95.00th=[ 660], 00:26:13.267 | 99.00th=[ 717], 99.50th=[ 734], 99.90th=[ 766], 00:26:13.267 | 99.95th=[ 783], 99.99th=[17112761] 00:26:13.267 bw ( KiB/s): min= 384, max= 4096, per=100.00%, avg=2730.67, stdev=1351.09, samples=21 00:26:13.267 iops : min= 96, max= 1024, avg=682.67, stdev=337.77, samples=21 00:26:13.267 lat (usec) : 250=0.21%, 500=17.17%, 
750=39.64%, 1000=32.68% 00:26:13.267 lat (msec) : 2=9.12%, 50=1.17%, >=2000=0.01% 00:26:13.267 cpu : usr=0.29%, sys=0.76%, ctx=14570, majf=0, minf=1 00:26:13.267 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:13.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.267 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.267 issued rwts: total=7168,7391,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.267 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:13.267 00:26:13.267 Run status group 0 (all jobs): 00:26:13.267 READ: bw=478KiB/s (489kB/s), 478KiB/s-478KiB/s (489kB/s-489kB/s), io=28.0MiB (29.4MB), run=60001-60001msec 00:26:13.267 WRITE: bw=493KiB/s (505kB/s), 493KiB/s-493KiB/s (505kB/s-505kB/s), io=28.9MiB (30.3MB), run=60001-60001msec 00:26:13.267 00:26:13.267 Disk stats (read/write): 00:26:13.267 nvme0n1: ios=7216/7220, merge=0/0, ticks=13665/3576, in_queue=17241, util=99.64% 00:26:13.267 23:22:33 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:13.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:13.267 23:22:33 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:13.267 23:22:33 -- common/autotest_common.sh@1198 -- # local i=0 00:26:13.267 23:22:33 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:26:13.267 23:22:33 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:13.267 23:22:33 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:26:13.267 23:22:33 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:13.267 23:22:33 -- common/autotest_common.sh@1210 -- # return 0 00:26:13.267 23:22:33 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:13.267 23:22:33 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:13.267 nvmf hotplug test: fio successful as expected 00:26:13.267 23:22:33 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:13.267 23:22:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:13.267 23:22:33 -- common/autotest_common.sh@10 -- # set +x 00:26:13.267 23:22:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:13.267 23:22:33 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:13.267 23:22:33 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:13.267 23:22:33 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:13.267 23:22:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:13.267 23:22:33 -- nvmf/common.sh@116 -- # sync 00:26:13.267 23:22:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:13.267 23:22:33 -- nvmf/common.sh@119 -- # set +e 00:26:13.267 23:22:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:13.267 23:22:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:13.267 rmmod nvme_tcp 00:26:13.267 rmmod nvme_fabrics 00:26:13.267 rmmod nvme_keyring 00:26:13.267 23:22:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:13.267 23:22:33 -- nvmf/common.sh@123 -- # set -e 00:26:13.267 23:22:33 -- nvmf/common.sh@124 -- # return 0 00:26:13.267 23:22:33 -- nvmf/common.sh@477 -- # '[' -n 2923445 ']' 00:26:13.267 23:22:33 -- nvmf/common.sh@478 -- # killprocess 2923445 00:26:13.267 23:22:33 -- common/autotest_common.sh@926 -- # '[' -z 2923445 ']' 00:26:13.267 23:22:33 -- common/autotest_common.sh@930 -- # 
kill -0 2923445 00:26:13.267 23:22:33 -- common/autotest_common.sh@931 -- # uname 00:26:13.267 23:22:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:13.267 23:22:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2923445 00:26:13.267 23:22:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:13.267 23:22:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:13.267 23:22:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2923445' 00:26:13.267 killing process with pid 2923445 00:26:13.267 23:22:33 -- common/autotest_common.sh@945 -- # kill 2923445 00:26:13.267 23:22:33 -- common/autotest_common.sh@950 -- # wait 2923445 00:26:13.267 23:22:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:13.267 23:22:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:13.267 23:22:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:13.267 23:22:33 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:13.267 23:22:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:13.267 23:22:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.267 23:22:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:13.267 23:22:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.528 23:22:36 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:13.528 00:26:13.528 real 1m14.784s 00:26:13.528 user 4m36.206s 00:26:13.528 sys 0m7.697s 00:26:13.528 23:22:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:13.528 23:22:36 -- common/autotest_common.sh@10 -- # set +x 00:26:13.528 ************************************ 00:26:13.528 END TEST nvmf_initiator_timeout 00:26:13.528 ************************************ 00:26:13.528 23:22:36 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:26:13.528 23:22:36 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:26:13.528 23:22:36 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:26:13.528 23:22:36 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:13.528 23:22:36 -- common/autotest_common.sh@10 -- # set +x 00:26:21.672 23:22:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:21.672 23:22:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:21.672 23:22:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:21.672 23:22:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:21.672 23:22:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:21.672 23:22:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:21.672 23:22:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:21.672 23:22:42 -- nvmf/common.sh@294 -- # net_devs=() 00:26:21.672 23:22:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:21.672 23:22:42 -- nvmf/common.sh@295 -- # e810=() 00:26:21.672 23:22:42 -- nvmf/common.sh@295 -- # local -ga e810 00:26:21.672 23:22:42 -- nvmf/common.sh@296 -- # x722=() 00:26:21.672 23:22:42 -- nvmf/common.sh@296 -- # local -ga x722 00:26:21.672 23:22:42 -- nvmf/common.sh@297 -- # mlx=() 00:26:21.672 23:22:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:21.672 23:22:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:21.672 23:22:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:21.672 23:22:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:21.672 23:22:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:21.672 23:22:42 -- nvmf/common.sh@307 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:21.672 23:22:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:21.672 23:22:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:21.672 23:22:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:21.672 23:22:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:21.672 23:22:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:21.672 23:22:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:21.672 23:22:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:21.672 23:22:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:21.672 23:22:42 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:21.672 23:22:42 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:21.672 23:22:42 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:21.672 23:22:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:21.672 23:22:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:21.672 23:22:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:21.672 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:21.672 23:22:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:21.672 23:22:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:21.672 23:22:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:21.672 23:22:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:21.672 23:22:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:21.672 23:22:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:21.672 23:22:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:21.672 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:21.672 23:22:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:21.672 23:22:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:21.672 23:22:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:21.672 23:22:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:21.672 23:22:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:21.672 23:22:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:21.672 23:22:42 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:21.672 23:22:42 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:21.672 23:22:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:21.672 23:22:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:21.672 23:22:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:21.672 23:22:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:21.672 23:22:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:21.672 Found net devices under 0000:31:00.0: cvl_0_0 00:26:21.672 23:22:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:21.672 23:22:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:21.672 23:22:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:21.672 23:22:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:21.672 23:22:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:21.672 23:22:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:21.672 Found net devices under 0000:31:00.1: cvl_0_1 00:26:21.672 23:22:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:21.672 23:22:42 -- 
nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:21.672 23:22:42 -- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:21.672 23:22:42 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:26:21.672 23:22:42 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:21.672 23:22:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:21.673 23:22:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:21.673 23:22:42 -- common/autotest_common.sh@10 -- # set +x 00:26:21.673 ************************************ 00:26:21.673 START TEST nvmf_perf_adq 00:26:21.673 ************************************ 00:26:21.673 23:22:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:21.673 * Looking for test storage... 00:26:21.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:21.673 23:22:43 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:21.673 23:22:43 -- nvmf/common.sh@7 -- # uname -s 00:26:21.673 23:22:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:21.673 23:22:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:21.673 23:22:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:21.673 23:22:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:21.673 23:22:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:21.673 23:22:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:21.673 23:22:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:21.673 23:22:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:21.673 23:22:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:21.673 23:22:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:21.673 23:22:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:21.673 23:22:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:21.673 23:22:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:21.673 23:22:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:21.673 23:22:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:21.673 23:22:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:21.673 23:22:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:21.673 23:22:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:21.673 23:22:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:21.673 23:22:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.673 23:22:43 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.673 23:22:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.673 23:22:43 -- paths/export.sh@5 -- # export PATH 00:26:21.673 23:22:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.673 23:22:43 -- nvmf/common.sh@46 -- # : 0 00:26:21.673 23:22:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:21.673 23:22:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:21.673 23:22:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:21.673 23:22:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:21.673 23:22:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:21.673 23:22:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:21.673 23:22:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:21.673 23:22:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:21.673 23:22:43 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:21.673 23:22:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:21.673 23:22:43 -- common/autotest_common.sh@10 -- # set +x 00:26:28.258 23:22:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:28.258 23:22:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:28.258 23:22:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:28.258 23:22:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:28.258 23:22:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:28.258 23:22:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:28.258 23:22:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:28.258 23:22:50 -- nvmf/common.sh@294 -- # net_devs=() 00:26:28.258 23:22:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:28.258 23:22:50 -- nvmf/common.sh@295 -- # e810=() 00:26:28.258 23:22:50 -- nvmf/common.sh@295 -- # local -ga e810 00:26:28.258 23:22:50 -- nvmf/common.sh@296 -- # x722=() 00:26:28.258 23:22:50 -- nvmf/common.sh@296 -- # local -ga x722 00:26:28.258 23:22:50 -- nvmf/common.sh@297 -- # mlx=() 00:26:28.258 23:22:50 -- nvmf/common.sh@297 -- # local 
-ga mlx 00:26:28.258 23:22:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:28.258 23:22:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:28.258 23:22:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:28.258 23:22:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:28.258 23:22:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:28.258 23:22:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:28.258 23:22:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:28.259 23:22:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:28.259 23:22:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:28.259 23:22:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:28.259 23:22:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:28.259 23:22:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:28.259 23:22:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:28.259 23:22:50 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:28.259 23:22:50 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:28.259 23:22:50 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:28.259 23:22:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:28.259 23:22:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:28.259 23:22:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:28.259 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:28.259 23:22:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:28.259 23:22:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:28.259 23:22:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.259 23:22:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.259 23:22:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:28.259 23:22:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:28.259 23:22:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:28.259 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:28.259 23:22:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:28.259 23:22:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:28.259 23:22:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.259 23:22:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.259 23:22:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:28.259 23:22:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:28.259 23:22:50 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:28.259 23:22:50 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:28.259 23:22:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:28.259 23:22:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.259 23:22:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:28.259 23:22:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.259 23:22:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:28.259 Found net devices under 0000:31:00.0: cvl_0_0 00:26:28.259 23:22:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.259 23:22:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:28.259 23:22:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:26:28.259 23:22:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:28.259 23:22:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.259 23:22:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:28.259 Found net devices under 0000:31:00.1: cvl_0_1 00:26:28.259 23:22:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.259 23:22:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:28.259 23:22:50 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:28.259 23:22:50 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:28.259 23:22:50 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:28.259 23:22:50 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:26:28.259 23:22:50 -- target/perf_adq.sh@52 -- # rmmod ice 00:26:29.203 23:22:51 -- target/perf_adq.sh@53 -- # modprobe ice 00:26:31.115 23:22:53 -- target/perf_adq.sh@54 -- # sleep 5 00:26:36.403 23:22:58 -- target/perf_adq.sh@67 -- # nvmftestinit 00:26:36.403 23:22:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:36.403 23:22:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:36.403 23:22:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:36.403 23:22:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:36.403 23:22:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:36.403 23:22:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:36.403 23:22:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:36.403 23:22:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:36.403 23:22:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:36.403 23:22:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:36.403 23:22:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:36.403 23:22:58 -- common/autotest_common.sh@10 -- # set +x 00:26:36.403 23:22:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:36.403 23:22:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:36.403 23:22:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:36.403 23:22:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:36.403 23:22:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:36.403 23:22:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:36.403 23:22:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:36.403 23:22:58 -- nvmf/common.sh@294 -- # net_devs=() 00:26:36.403 23:22:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:36.403 23:22:58 -- nvmf/common.sh@295 -- # e810=() 00:26:36.403 23:22:58 -- nvmf/common.sh@295 -- # local -ga e810 00:26:36.403 23:22:58 -- nvmf/common.sh@296 -- # x722=() 00:26:36.403 23:22:58 -- nvmf/common.sh@296 -- # local -ga x722 00:26:36.403 23:22:58 -- nvmf/common.sh@297 -- # mlx=() 00:26:36.403 23:22:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:36.403 23:22:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:36.403 23:22:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:36.403 23:22:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:36.403 23:22:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:36.403 23:22:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:36.403 23:22:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:36.403 23:22:58 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:36.403 23:22:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:36.403 23:22:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:36.403 23:22:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:36.403 23:22:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:36.403 23:22:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:36.403 23:22:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:36.403 23:22:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:36.403 23:22:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:36.403 23:22:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:36.403 23:22:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:36.403 23:22:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:36.403 23:22:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:36.403 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:36.403 23:22:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:36.403 23:22:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:36.403 23:22:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.403 23:22:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.403 23:22:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:36.403 23:22:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:36.403 23:22:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:36.403 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:36.403 23:22:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:36.403 23:22:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:36.403 23:22:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.403 23:22:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.403 23:22:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:36.403 23:22:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:36.403 23:22:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:36.403 23:22:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:36.403 23:22:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:36.403 23:22:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.403 23:22:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:36.403 23:22:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.403 23:22:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:36.403 Found net devices under 0000:31:00.0: cvl_0_0 00:26:36.403 23:22:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.403 23:22:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:36.403 23:22:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.403 23:22:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:36.403 23:22:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.403 23:22:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:36.403 Found net devices under 0000:31:00.1: cvl_0_1 00:26:36.403 23:22:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.403 23:22:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:36.403 23:22:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:36.403 23:22:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:36.403 23:22:58 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:36.403 23:22:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:36.403 23:22:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:36.404 23:22:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:36.404 23:22:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:36.404 23:22:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:36.404 23:22:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:36.404 23:22:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:36.404 23:22:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:36.404 23:22:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:36.404 23:22:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:36.404 23:22:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:36.404 23:22:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:36.404 23:22:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:36.404 23:22:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:36.404 23:22:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:36.404 23:22:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:36.404 23:22:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:36.404 23:22:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:36.404 23:22:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:36.404 23:22:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:36.404 23:22:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:36.404 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:36.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms 00:26:36.404 00:26:36.404 --- 10.0.0.2 ping statistics --- 00:26:36.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.404 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:26:36.404 23:22:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:36.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:36.404 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:26:36.404 00:26:36.404 --- 10.0.0.1 ping statistics --- 00:26:36.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.404 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:26:36.404 23:22:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:36.404 23:22:58 -- nvmf/common.sh@410 -- # return 0 00:26:36.404 23:22:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:36.404 23:22:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:36.404 23:22:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:36.404 23:22:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:36.404 23:22:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:36.404 23:22:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:36.404 23:22:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:36.404 23:22:58 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:36.404 23:22:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:36.404 23:22:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:36.404 23:22:58 -- common/autotest_common.sh@10 -- # set +x 00:26:36.404 23:22:58 -- nvmf/common.sh@469 -- # nvmfpid=2945957 00:26:36.404 23:22:58 -- nvmf/common.sh@470 -- # waitforlisten 2945957 00:26:36.404 23:22:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:36.404 23:22:58 -- common/autotest_common.sh@819 -- # '[' -z 2945957 ']' 00:26:36.404 23:22:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:36.404 23:22:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:36.404 23:22:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:36.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:36.404 23:22:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:36.404 23:22:58 -- common/autotest_common.sh@10 -- # set +x 00:26:36.404 [2024-06-07 23:22:58.851299] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:36.404 [2024-06-07 23:22:58.851350] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:36.404 EAL: No free 2048 kB hugepages reported on node 1 00:26:36.404 [2024-06-07 23:22:58.920127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:36.404 [2024-06-07 23:22:58.950104] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:36.404 [2024-06-07 23:22:58.950239] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:36.404 [2024-06-07 23:22:58.950255] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:36.404 [2024-06-07 23:22:58.950268] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
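The nvmf_tcp_init sequence captured above builds the test topology out of a single dual-port NIC: one port (cvl_0_0) is moved into a private network namespace and acts as the target side, the other (cvl_0_1) stays in the root namespace as the initiator side, and a firewall rule opens the NVMe/TCP port. Condensed to the essential commands recorded in this log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP (inside namespace)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # NVMe/TCP default port
    ping -c 1 10.0.0.2                                                   # initiator -> target reachability check

Because the target application is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...), the reverse-direction ping above is wrapped in ip netns exec as well.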
00:26:36.404 [2024-06-07 23:22:58.950364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.404 [2024-06-07 23:22:58.950592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:36.404 [2024-06-07 23:22:58.950768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:36.404 [2024-06-07 23:22:58.950769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.404 23:22:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:36.404 23:22:59 -- common/autotest_common.sh@852 -- # return 0 00:26:36.404 23:22:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:36.404 23:22:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:36.404 23:22:59 -- common/autotest_common.sh@10 -- # set +x 00:26:36.404 23:22:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:36.404 23:22:59 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:26:36.404 23:22:59 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:36.404 23:22:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:36.404 23:22:59 -- common/autotest_common.sh@10 -- # set +x 00:26:36.404 23:22:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:36.404 23:22:59 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:26:36.404 23:22:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:36.404 23:22:59 -- common/autotest_common.sh@10 -- # set +x 00:26:36.665 23:22:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:36.665 23:22:59 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:36.665 23:22:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:36.665 23:22:59 -- common/autotest_common.sh@10 -- # set +x 00:26:36.665 [2024-06-07 23:22:59.137171] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:36.665 23:22:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:36.665 23:22:59 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:36.665 23:22:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:36.665 23:22:59 -- common/autotest_common.sh@10 -- # set +x 00:26:36.665 Malloc1 00:26:36.665 23:22:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:36.665 23:22:59 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:36.665 23:22:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:36.665 23:22:59 -- common/autotest_common.sh@10 -- # set +x 00:26:36.665 23:22:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:36.665 23:22:59 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:36.665 23:22:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:36.665 23:22:59 -- common/autotest_common.sh@10 -- # set +x 00:26:36.665 23:22:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:36.665 23:22:59 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:36.665 23:22:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:36.665 23:22:59 -- common/autotest_common.sh@10 -- # set +x 00:26:36.665 [2024-06-07 23:22:59.196514] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:36.665 23:22:59 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:36.665 23:22:59 -- target/perf_adq.sh@73 -- # perfpid=2945986 00:26:36.665 23:22:59 -- target/perf_adq.sh@74 -- # sleep 2 00:26:36.665 23:22:59 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:36.665 EAL: No free 2048 kB hugepages reported on node 1 00:26:38.578 23:23:01 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:26:38.578 23:23:01 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:38.578 23:23:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:38.578 23:23:01 -- target/perf_adq.sh@76 -- # wc -l 00:26:38.578 23:23:01 -- common/autotest_common.sh@10 -- # set +x 00:26:38.578 23:23:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:38.578 23:23:01 -- target/perf_adq.sh@76 -- # count=4 00:26:38.578 23:23:01 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:26:38.578 23:23:01 -- target/perf_adq.sh@81 -- # wait 2945986 00:26:46.718 Initializing NVMe Controllers 00:26:46.718 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:46.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:46.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:46.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:46.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:46.718 Initialization complete. Launching workers. 00:26:46.718 ======================================================== 00:26:46.718 Latency(us) 00:26:46.718 Device Information : IOPS MiB/s Average min max 00:26:46.718 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12543.26 49.00 5102.72 984.25 8455.87 00:26:46.718 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15580.68 60.86 4106.86 1288.98 10046.53 00:26:46.718 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14011.82 54.73 4567.10 936.96 45457.79 00:26:46.718 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 15801.37 61.72 4049.73 869.76 10033.76 00:26:46.718 ======================================================== 00:26:46.718 Total : 57937.12 226.32 4418.19 869.76 45457.79 00:26:46.718 00:26:46.718 23:23:09 -- target/perf_adq.sh@82 -- # nvmftestfini 00:26:46.718 23:23:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:46.718 23:23:09 -- nvmf/common.sh@116 -- # sync 00:26:46.718 23:23:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:46.718 23:23:09 -- nvmf/common.sh@119 -- # set +e 00:26:46.718 23:23:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:46.718 23:23:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:46.718 rmmod nvme_tcp 00:26:46.981 rmmod nvme_fabrics 00:26:46.981 rmmod nvme_keyring 00:26:46.981 23:23:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:46.981 23:23:09 -- nvmf/common.sh@123 -- # set -e 00:26:46.981 23:23:09 -- nvmf/common.sh@124 -- # return 0 00:26:46.981 23:23:09 -- nvmf/common.sh@477 -- # '[' -n 2945957 ']' 00:26:46.981 23:23:09 -- nvmf/common.sh@478 -- # killprocess 2945957 00:26:46.981 23:23:09 -- common/autotest_common.sh@926 -- # '[' -z 2945957 ']' 00:26:46.982 23:23:09 -- common/autotest_common.sh@930 -- # 
kill -0 2945957 00:26:46.982 23:23:09 -- common/autotest_common.sh@931 -- # uname 00:26:46.982 23:23:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:46.982 23:23:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2945957 00:26:46.982 23:23:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:46.982 23:23:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:46.982 23:23:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2945957' 00:26:46.982 killing process with pid 2945957 00:26:46.982 23:23:09 -- common/autotest_common.sh@945 -- # kill 2945957 00:26:46.982 23:23:09 -- common/autotest_common.sh@950 -- # wait 2945957 00:26:46.982 23:23:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:46.982 23:23:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:46.982 23:23:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:46.982 23:23:09 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:46.982 23:23:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:46.982 23:23:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.982 23:23:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:46.982 23:23:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.528 23:23:11 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:49.528 23:23:11 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:26:49.528 23:23:11 -- target/perf_adq.sh@52 -- # rmmod ice 00:26:50.913 23:23:13 -- target/perf_adq.sh@53 -- # modprobe ice 00:26:52.828 23:23:15 -- target/perf_adq.sh@54 -- # sleep 5 00:26:58.233 23:23:20 -- target/perf_adq.sh@87 -- # nvmftestinit 00:26:58.233 23:23:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:58.233 23:23:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:58.233 23:23:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:58.233 23:23:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:58.233 23:23:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:58.233 23:23:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.233 23:23:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:58.233 23:23:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.233 23:23:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:58.233 23:23:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:58.233 23:23:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:58.233 23:23:20 -- common/autotest_common.sh@10 -- # set +x 00:26:58.233 23:23:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:58.233 23:23:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:58.233 23:23:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:58.233 23:23:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:58.233 23:23:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:58.233 23:23:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:58.233 23:23:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:58.233 23:23:20 -- nvmf/common.sh@294 -- # net_devs=() 00:26:58.233 23:23:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:58.233 23:23:20 -- nvmf/common.sh@295 -- # e810=() 00:26:58.233 23:23:20 -- nvmf/common.sh@295 -- # local -ga e810 00:26:58.233 23:23:20 -- nvmf/common.sh@296 -- # x722=() 00:26:58.233 23:23:20 -- nvmf/common.sh@296 -- # local -ga x722 00:26:58.233 23:23:20 -- nvmf/common.sh@297 -- # mlx=() 00:26:58.233 23:23:20 -- 
nvmf/common.sh@297 -- # local -ga mlx 00:26:58.233 23:23:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:58.233 23:23:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:58.234 23:23:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:58.234 23:23:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:58.234 23:23:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:58.234 23:23:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:58.234 23:23:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:58.234 23:23:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:58.234 23:23:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:58.234 23:23:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:58.234 23:23:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:58.234 23:23:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:58.234 23:23:20 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:58.234 23:23:20 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:58.234 23:23:20 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:58.234 23:23:20 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:58.234 23:23:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:58.234 23:23:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:58.234 23:23:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:58.234 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:58.234 23:23:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:58.234 23:23:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:58.234 23:23:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.234 23:23:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.234 23:23:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:58.234 23:23:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:58.234 23:23:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:58.234 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:58.234 23:23:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:58.234 23:23:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:58.234 23:23:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.234 23:23:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.234 23:23:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:58.234 23:23:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:58.234 23:23:20 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:58.234 23:23:20 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:58.234 23:23:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:58.234 23:23:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.234 23:23:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:58.234 23:23:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.234 23:23:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:58.234 Found net devices under 0000:31:00.0: cvl_0_0 00:26:58.234 23:23:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.234 23:23:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:58.234 23:23:20 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.234 23:23:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:58.234 23:23:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.234 23:23:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:58.234 Found net devices under 0000:31:00.1: cvl_0_1 00:26:58.234 23:23:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.234 23:23:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:58.234 23:23:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:58.234 23:23:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:58.234 23:23:20 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:58.234 23:23:20 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:58.234 23:23:20 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:58.234 23:23:20 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:58.234 23:23:20 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:58.234 23:23:20 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:58.234 23:23:20 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:58.234 23:23:20 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:58.234 23:23:20 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:58.234 23:23:20 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:58.234 23:23:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:58.234 23:23:20 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:58.234 23:23:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:58.234 23:23:20 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:58.234 23:23:20 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:58.234 23:23:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:58.234 23:23:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:58.234 23:23:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:58.234 23:23:20 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:58.234 23:23:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:58.234 23:23:20 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:58.234 23:23:20 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:58.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:58.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:26:58.234 00:26:58.234 --- 10.0.0.2 ping statistics --- 00:26:58.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.234 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:26:58.234 23:23:20 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:58.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:58.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:26:58.234 00:26:58.234 --- 10.0.0.1 ping statistics --- 00:26:58.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.234 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:26:58.234 23:23:20 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:58.234 23:23:20 -- nvmf/common.sh@410 -- # return 0 00:26:58.234 23:23:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:58.234 23:23:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:58.234 23:23:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:58.234 23:23:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:58.234 23:23:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:58.234 23:23:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:58.234 23:23:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:58.234 23:23:20 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:26:58.234 23:23:20 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:26:58.234 23:23:20 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:26:58.234 23:23:20 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:26:58.234 net.core.busy_poll = 1 00:26:58.234 23:23:20 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:26:58.234 net.core.busy_read = 1 00:26:58.234 23:23:20 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:26:58.234 23:23:20 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:26:58.234 23:23:20 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:26:58.234 23:23:20 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:26:58.234 23:23:20 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:26:58.234 23:23:20 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:58.234 23:23:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:58.234 23:23:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:58.234 23:23:20 -- common/autotest_common.sh@10 -- # set +x 00:26:58.234 23:23:20 -- nvmf/common.sh@469 -- # nvmfpid=2950567 00:26:58.234 23:23:20 -- nvmf/common.sh@470 -- # waitforlisten 2950567 00:26:58.234 23:23:20 -- common/autotest_common.sh@819 -- # '[' -z 2950567 ']' 00:26:58.234 23:23:20 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:58.234 23:23:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:58.234 23:23:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:58.234 23:23:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:58.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
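The adq_configure_driver steps above are what actually enable ADQ for this run: hardware TC offload is switched on, busy polling is enabled so application threads poll their sockets, and a flower filter steers NVMe/TCP traffic (TCP dst_port 4420) into a dedicated hardware traffic class. Stripped of the ip netns exec prefix, the sequence recorded in this log is:

    ethtool --offload cvl_0_0 hw-tc-offload on
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # two traffic classes: TC0 = 2 queues starting at queue 0, TC1 = 2 queues starting at queue 2
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    # steer NVMe/TCP (dst port 4420) to TC1 in hardware only (skip_sw)
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The set_xps_rxqs helper invoked afterwards pins transmit queues to the matching receive queues; its exact behaviour is script-specific and not reproduced here.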
00:26:58.234 23:23:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:58.234 23:23:20 -- common/autotest_common.sh@10 -- # set +x 00:26:58.234 [2024-06-07 23:23:20.847942] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:58.234 [2024-06-07 23:23:20.848012] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:58.234 EAL: No free 2048 kB hugepages reported on node 1 00:26:58.496 [2024-06-07 23:23:20.922207] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:58.496 [2024-06-07 23:23:20.961629] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:58.496 [2024-06-07 23:23:20.961783] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:58.496 [2024-06-07 23:23:20.961793] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:58.496 [2024-06-07 23:23:20.961801] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:58.496 [2024-06-07 23:23:20.961963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.496 [2024-06-07 23:23:20.962086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:58.496 [2024-06-07 23:23:20.962255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.496 [2024-06-07 23:23:20.962266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:59.070 23:23:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:59.070 23:23:21 -- common/autotest_common.sh@852 -- # return 0 00:26:59.070 23:23:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:59.070 23:23:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:59.070 23:23:21 -- common/autotest_common.sh@10 -- # set +x 00:26:59.070 23:23:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:59.070 23:23:21 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:26:59.070 23:23:21 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:26:59.070 23:23:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:59.070 23:23:21 -- common/autotest_common.sh@10 -- # set +x 00:26:59.070 23:23:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:59.070 23:23:21 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:26:59.070 23:23:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:59.070 23:23:21 -- common/autotest_common.sh@10 -- # set +x 00:26:59.070 23:23:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:59.070 23:23:21 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:26:59.070 23:23:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:59.070 23:23:21 -- common/autotest_common.sh@10 -- # set +x 00:26:59.332 [2024-06-07 23:23:21.754183] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:59.332 23:23:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:59.332 23:23:21 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:59.332 23:23:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:59.332 23:23:21 -- 
common/autotest_common.sh@10 -- # set +x 00:26:59.332 Malloc1 00:26:59.332 23:23:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:59.332 23:23:21 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:59.332 23:23:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:59.332 23:23:21 -- common/autotest_common.sh@10 -- # set +x 00:26:59.332 23:23:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:59.332 23:23:21 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:59.332 23:23:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:59.332 23:23:21 -- common/autotest_common.sh@10 -- # set +x 00:26:59.332 23:23:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:59.332 23:23:21 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:59.332 23:23:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:59.332 23:23:21 -- common/autotest_common.sh@10 -- # set +x 00:26:59.332 [2024-06-07 23:23:21.809520] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:59.332 23:23:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:59.332 23:23:21 -- target/perf_adq.sh@94 -- # perfpid=2950861 00:26:59.332 23:23:21 -- target/perf_adq.sh@95 -- # sleep 2 00:26:59.332 23:23:21 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:59.332 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.251 23:23:23 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:27:01.251 23:23:23 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:01.251 23:23:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:01.251 23:23:23 -- target/perf_adq.sh@97 -- # wc -l 00:27:01.251 23:23:23 -- common/autotest_common.sh@10 -- # set +x 00:27:01.251 23:23:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:01.251 23:23:23 -- target/perf_adq.sh@97 -- # count=2 00:27:01.251 23:23:23 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:27:01.251 23:23:23 -- target/perf_adq.sh@103 -- # wait 2950861 00:27:09.394 Initializing NVMe Controllers 00:27:09.394 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:09.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:09.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:09.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:09.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:09.394 Initialization complete. Launching workers. 
00:27:09.394 ======================================================== 00:27:09.394 Latency(us) 00:27:09.394 Device Information : IOPS MiB/s Average min max 00:27:09.394 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10232.20 39.97 6255.15 1228.47 49284.49 00:27:09.394 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 12436.10 48.58 5145.93 1212.34 49042.81 00:27:09.394 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10563.90 41.27 6058.12 1116.09 52918.10 00:27:09.394 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9827.40 38.39 6531.88 1351.09 52192.77 00:27:09.394 ======================================================== 00:27:09.394 Total : 43059.58 168.20 5949.61 1116.09 52918.10 00:27:09.394 00:27:09.394 23:23:32 -- target/perf_adq.sh@104 -- # nvmftestfini 00:27:09.394 23:23:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:09.394 23:23:32 -- nvmf/common.sh@116 -- # sync 00:27:09.394 23:23:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:09.394 23:23:32 -- nvmf/common.sh@119 -- # set +e 00:27:09.394 23:23:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:09.394 23:23:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:09.394 rmmod nvme_tcp 00:27:09.394 rmmod nvme_fabrics 00:27:09.394 rmmod nvme_keyring 00:27:09.394 23:23:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:09.394 23:23:32 -- nvmf/common.sh@123 -- # set -e 00:27:09.394 23:23:32 -- nvmf/common.sh@124 -- # return 0 00:27:09.394 23:23:32 -- nvmf/common.sh@477 -- # '[' -n 2950567 ']' 00:27:09.394 23:23:32 -- nvmf/common.sh@478 -- # killprocess 2950567 00:27:09.394 23:23:32 -- common/autotest_common.sh@926 -- # '[' -z 2950567 ']' 00:27:09.394 23:23:32 -- common/autotest_common.sh@930 -- # kill -0 2950567 00:27:09.394 23:23:32 -- common/autotest_common.sh@931 -- # uname 00:27:09.394 23:23:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:09.394 23:23:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2950567 00:27:09.654 23:23:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:09.654 23:23:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:09.654 23:23:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2950567' 00:27:09.654 killing process with pid 2950567 00:27:09.654 23:23:32 -- common/autotest_common.sh@945 -- # kill 2950567 00:27:09.654 23:23:32 -- common/autotest_common.sh@950 -- # wait 2950567 00:27:09.654 23:23:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:09.654 23:23:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:09.654 23:23:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:09.654 23:23:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:09.655 23:23:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:09.655 23:23:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:09.655 23:23:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:09.655 23:23:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.958 23:23:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:12.958 23:23:35 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:27:12.958 00:27:12.958 real 0m52.390s 00:27:12.958 user 2m46.676s 00:27:12.958 sys 0m10.573s 00:27:12.958 23:23:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:12.958 23:23:35 -- common/autotest_common.sh@10 -- # set +x 00:27:12.958 
************************************ 00:27:12.958 END TEST nvmf_perf_adq 00:27:12.958 ************************************ 00:27:12.958 23:23:35 -- nvmf/nvmf.sh@80 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:12.958 23:23:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:12.958 23:23:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:12.958 23:23:35 -- common/autotest_common.sh@10 -- # set +x 00:27:12.958 ************************************ 00:27:12.958 START TEST nvmf_shutdown 00:27:12.958 ************************************ 00:27:12.958 23:23:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:12.958 * Looking for test storage... 00:27:12.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:12.958 23:23:35 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:12.958 23:23:35 -- nvmf/common.sh@7 -- # uname -s 00:27:12.958 23:23:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:12.958 23:23:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:12.958 23:23:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:12.958 23:23:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:12.958 23:23:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:12.958 23:23:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:12.958 23:23:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:12.958 23:23:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:12.958 23:23:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:12.958 23:23:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:12.958 23:23:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:12.958 23:23:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:12.958 23:23:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:12.958 23:23:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:12.958 23:23:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:12.958 23:23:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:12.958 23:23:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:12.958 23:23:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:12.958 23:23:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:12.958 23:23:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.958 23:23:35 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.958 23:23:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.958 23:23:35 -- paths/export.sh@5 -- # export PATH 00:27:12.958 23:23:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.958 23:23:35 -- nvmf/common.sh@46 -- # : 0 00:27:12.958 23:23:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:12.958 23:23:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:12.958 23:23:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:12.958 23:23:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:12.958 23:23:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:12.958 23:23:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:12.958 23:23:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:12.958 23:23:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:12.958 23:23:35 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:12.958 23:23:35 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:12.958 23:23:35 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:12.958 23:23:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:12.958 23:23:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:12.958 23:23:35 -- common/autotest_common.sh@10 -- # set +x 00:27:12.958 ************************************ 00:27:12.958 START TEST nvmf_shutdown_tc1 00:27:12.958 ************************************ 00:27:12.958 23:23:35 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:27:12.958 23:23:35 -- target/shutdown.sh@74 -- # starttarget 00:27:12.958 23:23:35 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:12.958 23:23:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:12.958 23:23:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:12.958 23:23:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:12.958 23:23:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:12.958 23:23:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:12.958 
23:23:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.958 23:23:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:12.958 23:23:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.958 23:23:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:12.958 23:23:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:12.958 23:23:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:12.958 23:23:35 -- common/autotest_common.sh@10 -- # set +x 00:27:21.107 23:23:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:21.107 23:23:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:21.107 23:23:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:21.107 23:23:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:21.107 23:23:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:21.107 23:23:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:21.107 23:23:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:21.107 23:23:42 -- nvmf/common.sh@294 -- # net_devs=() 00:27:21.107 23:23:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:21.107 23:23:42 -- nvmf/common.sh@295 -- # e810=() 00:27:21.107 23:23:42 -- nvmf/common.sh@295 -- # local -ga e810 00:27:21.107 23:23:42 -- nvmf/common.sh@296 -- # x722=() 00:27:21.107 23:23:42 -- nvmf/common.sh@296 -- # local -ga x722 00:27:21.107 23:23:42 -- nvmf/common.sh@297 -- # mlx=() 00:27:21.107 23:23:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:21.107 23:23:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:21.107 23:23:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:21.107 23:23:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:21.107 23:23:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:21.107 23:23:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:21.107 23:23:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:21.107 23:23:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:21.107 23:23:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:21.107 23:23:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:21.107 23:23:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:21.107 23:23:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:21.107 23:23:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:21.107 23:23:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:21.107 23:23:42 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:21.107 23:23:42 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:21.107 23:23:42 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:21.107 23:23:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:21.107 23:23:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:21.107 23:23:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:21.107 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:21.107 23:23:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:21.107 23:23:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:21.107 23:23:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.107 23:23:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.107 23:23:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:21.107 23:23:42 -- nvmf/common.sh@339 
-- # for pci in "${pci_devs[@]}" 00:27:21.107 23:23:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:21.107 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:21.107 23:23:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:21.107 23:23:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:21.107 23:23:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.107 23:23:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.107 23:23:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:21.107 23:23:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:21.107 23:23:42 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:21.107 23:23:42 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:21.107 23:23:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:21.107 23:23:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.107 23:23:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:21.107 23:23:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.107 23:23:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:21.107 Found net devices under 0000:31:00.0: cvl_0_0 00:27:21.107 23:23:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.107 23:23:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:21.107 23:23:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.107 23:23:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:21.107 23:23:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.107 23:23:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:21.107 Found net devices under 0000:31:00.1: cvl_0_1 00:27:21.107 23:23:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.107 23:23:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:21.107 23:23:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:21.107 23:23:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:21.107 23:23:42 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:21.107 23:23:42 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:21.107 23:23:42 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:21.107 23:23:42 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:21.107 23:23:42 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:21.107 23:23:42 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:21.107 23:23:42 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:21.107 23:23:42 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:21.107 23:23:42 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:21.107 23:23:42 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:21.107 23:23:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:21.107 23:23:42 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:21.107 23:23:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:21.107 23:23:42 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:21.107 23:23:42 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:21.107 23:23:42 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:21.107 23:23:42 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:21.107 23:23:42 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:21.107 23:23:42 -- nvmf/common.sh@259 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:21.107 23:23:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:21.107 23:23:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:21.107 23:23:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:21.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:21.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.592 ms 00:27:21.107 00:27:21.107 --- 10.0.0.2 ping statistics --- 00:27:21.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.107 rtt min/avg/max/mdev = 0.592/0.592/0.592/0.000 ms 00:27:21.107 23:23:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:21.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:21.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:27:21.107 00:27:21.107 --- 10.0.0.1 ping statistics --- 00:27:21.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.107 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:27:21.107 23:23:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:21.107 23:23:42 -- nvmf/common.sh@410 -- # return 0 00:27:21.107 23:23:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:21.107 23:23:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:21.107 23:23:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:21.107 23:23:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:21.107 23:23:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:21.107 23:23:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:21.108 23:23:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:21.108 23:23:42 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:21.108 23:23:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:21.108 23:23:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:21.108 23:23:42 -- common/autotest_common.sh@10 -- # set +x 00:27:21.108 23:23:42 -- nvmf/common.sh@469 -- # nvmfpid=2957414 00:27:21.108 23:23:42 -- nvmf/common.sh@470 -- # waitforlisten 2957414 00:27:21.108 23:23:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:21.108 23:23:42 -- common/autotest_common.sh@819 -- # '[' -z 2957414 ']' 00:27:21.108 23:23:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:21.108 23:23:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:21.108 23:23:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:21.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:21.108 23:23:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:21.108 23:23:42 -- common/autotest_common.sh@10 -- # set +x 00:27:21.108 [2024-06-07 23:23:43.005293] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
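The nvmf_tcp_init block above is the "loopback over a netns" trick these TCP tests rely on: the target-side port is moved into its own network namespace so the initiator (root namespace) and the target (cvl_0_0_ns_spdk) talk over a real NIC pair on a single host. Condensed into a sketch, using only commands, interface names and addresses that appear in the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # NVMF_INITIATOR_IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # NVMF_FIRST_TARGET_IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                                 # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator reachability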
00:27:21.108 [2024-06-07 23:23:43.005356] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:21.108 EAL: No free 2048 kB hugepages reported on node 1 00:27:21.108 [2024-06-07 23:23:43.092691] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:21.108 [2024-06-07 23:23:43.138908] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:21.108 [2024-06-07 23:23:43.139052] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:21.108 [2024-06-07 23:23:43.139062] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:21.108 [2024-06-07 23:23:43.139069] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:21.108 [2024-06-07 23:23:43.139205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:21.108 [2024-06-07 23:23:43.139576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:21.108 [2024-06-07 23:23:43.139787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:21.108 [2024-06-07 23:23:43.139790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.108 23:23:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:21.108 23:23:43 -- common/autotest_common.sh@852 -- # return 0 00:27:21.108 23:23:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:21.108 23:23:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:21.108 23:23:43 -- common/autotest_common.sh@10 -- # set +x 00:27:21.369 23:23:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:21.369 23:23:43 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:21.369 23:23:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:21.369 23:23:43 -- common/autotest_common.sh@10 -- # set +x 00:27:21.369 [2024-06-07 23:23:43.828492] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:21.369 23:23:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:21.369 23:23:43 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:21.369 23:23:43 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:21.369 23:23:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:21.369 23:23:43 -- common/autotest_common.sh@10 -- # set +x 00:27:21.369 23:23:43 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:21.369 23:23:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:21.369 23:23:43 -- target/shutdown.sh@28 -- # cat 00:27:21.369 23:23:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:21.369 23:23:43 -- target/shutdown.sh@28 -- # cat 00:27:21.369 23:23:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:21.369 23:23:43 -- target/shutdown.sh@28 -- # cat 00:27:21.369 23:23:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:21.369 23:23:43 -- target/shutdown.sh@28 -- # cat 00:27:21.369 23:23:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:21.369 23:23:43 -- target/shutdown.sh@28 -- # cat 00:27:21.369 23:23:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:21.369 23:23:43 -- 
target/shutdown.sh@28 -- # cat 00:27:21.369 23:23:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:21.369 23:23:43 -- target/shutdown.sh@28 -- # cat 00:27:21.369 23:23:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:21.369 23:23:43 -- target/shutdown.sh@28 -- # cat 00:27:21.369 23:23:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:21.369 23:23:43 -- target/shutdown.sh@28 -- # cat 00:27:21.369 23:23:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:21.369 23:23:43 -- target/shutdown.sh@28 -- # cat 00:27:21.369 23:23:43 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:21.369 23:23:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:21.369 23:23:43 -- common/autotest_common.sh@10 -- # set +x 00:27:21.369 Malloc1 00:27:21.369 [2024-06-07 23:23:43.932012] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:21.369 Malloc2 00:27:21.369 Malloc3 00:27:21.369 Malloc4 00:27:21.631 Malloc5 00:27:21.631 Malloc6 00:27:21.631 Malloc7 00:27:21.631 Malloc8 00:27:21.631 Malloc9 00:27:21.631 Malloc10 00:27:21.631 23:23:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:21.631 23:23:44 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:21.631 23:23:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:21.631 23:23:44 -- common/autotest_common.sh@10 -- # set +x 00:27:21.892 23:23:44 -- target/shutdown.sh@78 -- # perfpid=2957702 00:27:21.892 23:23:44 -- target/shutdown.sh@79 -- # waitforlisten 2957702 /var/tmp/bdevperf.sock 00:27:21.892 23:23:44 -- common/autotest_common.sh@819 -- # '[' -z 2957702 ']' 00:27:21.892 23:23:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:21.892 23:23:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:21.892 23:23:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:21.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
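The create_subsystems loop traced above (target/shutdown.sh@27/@28) only cats one block of rpc.py commands per subsystem into rpcs.txt; the single rpc_cmd at @35 then replays the whole file against the target, presumably with rpcs.txt on stdin, which is why Malloc1..Malloc10 and the ten cnode subsystems appear in one burst right after the listener notice. The heredoc contents are not echoed in the trace, so the block below is a hedged reconstruction rendered with echo rather than the script's cat/heredoc; the malloc size, block size and serial number are illustrative assumptions, and only the bdev/NQN names and the 10.0.0.2:4420 listener come from the log:

  for i in "${num_subsystems[@]}"; do      # num_subsystems=({1..10})
      {
          echo "bdev_malloc_create -b Malloc$i 64 512"
          echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
          echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
          echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
      } >> rpcs.txt
  done
  rpc_cmd < rpcs.txt                       # batch-replay over /var/tmp/spdk.sock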
00:27:21.892 23:23:44 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:21.892 23:23:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:21.892 23:23:44 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:21.892 23:23:44 -- common/autotest_common.sh@10 -- # set +x 00:27:21.892 23:23:44 -- nvmf/common.sh@520 -- # config=() 00:27:21.892 23:23:44 -- nvmf/common.sh@520 -- # local subsystem config 00:27:21.892 23:23:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:21.892 23:23:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:21.892 { 00:27:21.892 "params": { 00:27:21.892 "name": "Nvme$subsystem", 00:27:21.892 "trtype": "$TEST_TRANSPORT", 00:27:21.892 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:21.892 "adrfam": "ipv4", 00:27:21.892 "trsvcid": "$NVMF_PORT", 00:27:21.892 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:21.892 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:21.892 "hdgst": ${hdgst:-false}, 00:27:21.892 "ddgst": ${ddgst:-false} 00:27:21.892 }, 00:27:21.892 "method": "bdev_nvme_attach_controller" 00:27:21.892 } 00:27:21.892 EOF 00:27:21.892 )") 00:27:21.892 23:23:44 -- nvmf/common.sh@542 -- # cat 00:27:21.892 23:23:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:21.892 23:23:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:21.892 { 00:27:21.892 "params": { 00:27:21.892 "name": "Nvme$subsystem", 00:27:21.892 "trtype": "$TEST_TRANSPORT", 00:27:21.892 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:21.892 "adrfam": "ipv4", 00:27:21.892 "trsvcid": "$NVMF_PORT", 00:27:21.892 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:21.892 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:21.892 "hdgst": ${hdgst:-false}, 00:27:21.892 "ddgst": ${ddgst:-false} 00:27:21.892 }, 00:27:21.892 "method": "bdev_nvme_attach_controller" 00:27:21.892 } 00:27:21.892 EOF 00:27:21.892 )") 00:27:21.892 23:23:44 -- nvmf/common.sh@542 -- # cat 00:27:21.892 23:23:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:21.892 23:23:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:21.892 { 00:27:21.892 "params": { 00:27:21.892 "name": "Nvme$subsystem", 00:27:21.892 "trtype": "$TEST_TRANSPORT", 00:27:21.892 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:21.892 "adrfam": "ipv4", 00:27:21.892 "trsvcid": "$NVMF_PORT", 00:27:21.892 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:21.892 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:21.892 "hdgst": ${hdgst:-false}, 00:27:21.892 "ddgst": ${ddgst:-false} 00:27:21.892 }, 00:27:21.892 "method": "bdev_nvme_attach_controller" 00:27:21.892 } 00:27:21.892 EOF 00:27:21.892 )") 00:27:21.892 23:23:44 -- nvmf/common.sh@542 -- # cat 00:27:21.892 23:23:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:21.892 23:23:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:21.892 { 00:27:21.892 "params": { 00:27:21.893 "name": "Nvme$subsystem", 00:27:21.893 "trtype": "$TEST_TRANSPORT", 00:27:21.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:21.893 "adrfam": "ipv4", 00:27:21.893 "trsvcid": "$NVMF_PORT", 00:27:21.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:21.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:21.893 "hdgst": ${hdgst:-false}, 00:27:21.893 "ddgst": ${ddgst:-false} 00:27:21.893 }, 00:27:21.893 "method": "bdev_nvme_attach_controller" 00:27:21.893 } 00:27:21.893 EOF 00:27:21.893 )") 00:27:21.893 23:23:44 -- 
nvmf/common.sh@542 -- # cat 00:27:21.893 23:23:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:21.893 23:23:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:21.893 { 00:27:21.893 "params": { 00:27:21.893 "name": "Nvme$subsystem", 00:27:21.893 "trtype": "$TEST_TRANSPORT", 00:27:21.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:21.893 "adrfam": "ipv4", 00:27:21.893 "trsvcid": "$NVMF_PORT", 00:27:21.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:21.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:21.893 "hdgst": ${hdgst:-false}, 00:27:21.893 "ddgst": ${ddgst:-false} 00:27:21.893 }, 00:27:21.893 "method": "bdev_nvme_attach_controller" 00:27:21.893 } 00:27:21.893 EOF 00:27:21.893 )") 00:27:21.893 23:23:44 -- nvmf/common.sh@542 -- # cat 00:27:21.893 23:23:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:21.893 23:23:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:21.893 { 00:27:21.893 "params": { 00:27:21.893 "name": "Nvme$subsystem", 00:27:21.893 "trtype": "$TEST_TRANSPORT", 00:27:21.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:21.893 "adrfam": "ipv4", 00:27:21.893 "trsvcid": "$NVMF_PORT", 00:27:21.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:21.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:21.893 "hdgst": ${hdgst:-false}, 00:27:21.893 "ddgst": ${ddgst:-false} 00:27:21.893 }, 00:27:21.893 "method": "bdev_nvme_attach_controller" 00:27:21.893 } 00:27:21.893 EOF 00:27:21.893 )") 00:27:21.893 [2024-06-07 23:23:44.388695] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:21.893 [2024-06-07 23:23:44.388750] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:21.893 23:23:44 -- nvmf/common.sh@542 -- # cat 00:27:21.893 23:23:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:21.893 23:23:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:21.893 { 00:27:21.893 "params": { 00:27:21.893 "name": "Nvme$subsystem", 00:27:21.893 "trtype": "$TEST_TRANSPORT", 00:27:21.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:21.893 "adrfam": "ipv4", 00:27:21.893 "trsvcid": "$NVMF_PORT", 00:27:21.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:21.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:21.893 "hdgst": ${hdgst:-false}, 00:27:21.893 "ddgst": ${ddgst:-false} 00:27:21.893 }, 00:27:21.893 "method": "bdev_nvme_attach_controller" 00:27:21.893 } 00:27:21.893 EOF 00:27:21.893 )") 00:27:21.893 23:23:44 -- nvmf/common.sh@542 -- # cat 00:27:21.893 23:23:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:21.893 23:23:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:21.893 { 00:27:21.893 "params": { 00:27:21.893 "name": "Nvme$subsystem", 00:27:21.893 "trtype": "$TEST_TRANSPORT", 00:27:21.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:21.893 "adrfam": "ipv4", 00:27:21.893 "trsvcid": "$NVMF_PORT", 00:27:21.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:21.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:21.893 "hdgst": ${hdgst:-false}, 00:27:21.893 "ddgst": ${ddgst:-false} 00:27:21.893 }, 00:27:21.893 "method": "bdev_nvme_attach_controller" 00:27:21.893 } 00:27:21.893 EOF 00:27:21.893 )") 00:27:21.893 23:23:44 -- nvmf/common.sh@542 -- # cat 00:27:21.893 23:23:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:21.893 23:23:44 -- 
nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:21.893 { 00:27:21.893 "params": { 00:27:21.893 "name": "Nvme$subsystem", 00:27:21.893 "trtype": "$TEST_TRANSPORT", 00:27:21.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:21.893 "adrfam": "ipv4", 00:27:21.893 "trsvcid": "$NVMF_PORT", 00:27:21.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:21.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:21.893 "hdgst": ${hdgst:-false}, 00:27:21.893 "ddgst": ${ddgst:-false} 00:27:21.893 }, 00:27:21.893 "method": "bdev_nvme_attach_controller" 00:27:21.893 } 00:27:21.893 EOF 00:27:21.893 )") 00:27:21.893 23:23:44 -- nvmf/common.sh@542 -- # cat 00:27:21.893 EAL: No free 2048 kB hugepages reported on node 1 00:27:21.893 23:23:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:21.893 23:23:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:21.893 { 00:27:21.893 "params": { 00:27:21.893 "name": "Nvme$subsystem", 00:27:21.893 "trtype": "$TEST_TRANSPORT", 00:27:21.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:21.893 "adrfam": "ipv4", 00:27:21.893 "trsvcid": "$NVMF_PORT", 00:27:21.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:21.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:21.893 "hdgst": ${hdgst:-false}, 00:27:21.893 "ddgst": ${ddgst:-false} 00:27:21.893 }, 00:27:21.893 "method": "bdev_nvme_attach_controller" 00:27:21.893 } 00:27:21.893 EOF 00:27:21.893 )") 00:27:21.893 23:23:44 -- nvmf/common.sh@542 -- # cat 00:27:21.893 23:23:44 -- nvmf/common.sh@544 -- # jq . 00:27:21.893 23:23:44 -- nvmf/common.sh@545 -- # IFS=, 00:27:21.894 23:23:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:21.894 "params": { 00:27:21.894 "name": "Nvme1", 00:27:21.894 "trtype": "tcp", 00:27:21.894 "traddr": "10.0.0.2", 00:27:21.894 "adrfam": "ipv4", 00:27:21.894 "trsvcid": "4420", 00:27:21.894 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:21.894 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:21.894 "hdgst": false, 00:27:21.894 "ddgst": false 00:27:21.894 }, 00:27:21.894 "method": "bdev_nvme_attach_controller" 00:27:21.894 },{ 00:27:21.894 "params": { 00:27:21.894 "name": "Nvme2", 00:27:21.894 "trtype": "tcp", 00:27:21.894 "traddr": "10.0.0.2", 00:27:21.894 "adrfam": "ipv4", 00:27:21.894 "trsvcid": "4420", 00:27:21.894 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:21.894 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:21.894 "hdgst": false, 00:27:21.894 "ddgst": false 00:27:21.894 }, 00:27:21.894 "method": "bdev_nvme_attach_controller" 00:27:21.894 },{ 00:27:21.894 "params": { 00:27:21.894 "name": "Nvme3", 00:27:21.894 "trtype": "tcp", 00:27:21.894 "traddr": "10.0.0.2", 00:27:21.894 "adrfam": "ipv4", 00:27:21.894 "trsvcid": "4420", 00:27:21.894 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:21.894 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:21.894 "hdgst": false, 00:27:21.894 "ddgst": false 00:27:21.894 }, 00:27:21.894 "method": "bdev_nvme_attach_controller" 00:27:21.894 },{ 00:27:21.894 "params": { 00:27:21.894 "name": "Nvme4", 00:27:21.894 "trtype": "tcp", 00:27:21.894 "traddr": "10.0.0.2", 00:27:21.894 "adrfam": "ipv4", 00:27:21.894 "trsvcid": "4420", 00:27:21.894 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:21.894 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:21.894 "hdgst": false, 00:27:21.894 "ddgst": false 00:27:21.894 }, 00:27:21.894 "method": "bdev_nvme_attach_controller" 00:27:21.894 },{ 00:27:21.894 "params": { 00:27:21.894 "name": "Nvme5", 00:27:21.894 "trtype": "tcp", 00:27:21.894 "traddr": "10.0.0.2", 00:27:21.894 "adrfam": "ipv4", 00:27:21.894 
"trsvcid": "4420", 00:27:21.894 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:21.894 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:21.894 "hdgst": false, 00:27:21.894 "ddgst": false 00:27:21.894 }, 00:27:21.894 "method": "bdev_nvme_attach_controller" 00:27:21.894 },{ 00:27:21.894 "params": { 00:27:21.894 "name": "Nvme6", 00:27:21.894 "trtype": "tcp", 00:27:21.894 "traddr": "10.0.0.2", 00:27:21.894 "adrfam": "ipv4", 00:27:21.894 "trsvcid": "4420", 00:27:21.894 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:21.894 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:21.894 "hdgst": false, 00:27:21.894 "ddgst": false 00:27:21.894 }, 00:27:21.894 "method": "bdev_nvme_attach_controller" 00:27:21.894 },{ 00:27:21.894 "params": { 00:27:21.894 "name": "Nvme7", 00:27:21.894 "trtype": "tcp", 00:27:21.894 "traddr": "10.0.0.2", 00:27:21.894 "adrfam": "ipv4", 00:27:21.894 "trsvcid": "4420", 00:27:21.894 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:21.894 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:21.894 "hdgst": false, 00:27:21.894 "ddgst": false 00:27:21.894 }, 00:27:21.894 "method": "bdev_nvme_attach_controller" 00:27:21.894 },{ 00:27:21.894 "params": { 00:27:21.894 "name": "Nvme8", 00:27:21.894 "trtype": "tcp", 00:27:21.894 "traddr": "10.0.0.2", 00:27:21.894 "adrfam": "ipv4", 00:27:21.894 "trsvcid": "4420", 00:27:21.894 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:21.894 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:21.894 "hdgst": false, 00:27:21.894 "ddgst": false 00:27:21.894 }, 00:27:21.894 "method": "bdev_nvme_attach_controller" 00:27:21.894 },{ 00:27:21.894 "params": { 00:27:21.894 "name": "Nvme9", 00:27:21.894 "trtype": "tcp", 00:27:21.894 "traddr": "10.0.0.2", 00:27:21.894 "adrfam": "ipv4", 00:27:21.894 "trsvcid": "4420", 00:27:21.894 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:21.894 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:21.894 "hdgst": false, 00:27:21.894 "ddgst": false 00:27:21.894 }, 00:27:21.894 "method": "bdev_nvme_attach_controller" 00:27:21.894 },{ 00:27:21.894 "params": { 00:27:21.894 "name": "Nvme10", 00:27:21.894 "trtype": "tcp", 00:27:21.894 "traddr": "10.0.0.2", 00:27:21.894 "adrfam": "ipv4", 00:27:21.894 "trsvcid": "4420", 00:27:21.894 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:21.894 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:21.894 "hdgst": false, 00:27:21.894 "ddgst": false 00:27:21.894 }, 00:27:21.894 "method": "bdev_nvme_attach_controller" 00:27:21.894 }' 00:27:21.894 [2024-06-07 23:23:44.450493] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.894 [2024-06-07 23:23:44.479641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.278 23:23:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:23.279 23:23:45 -- common/autotest_common.sh@852 -- # return 0 00:27:23.279 23:23:45 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:23.279 23:23:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:23.279 23:23:45 -- common/autotest_common.sh@10 -- # set +x 00:27:23.279 23:23:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:23.279 23:23:45 -- target/shutdown.sh@83 -- # kill -9 2957702 00:27:23.279 23:23:45 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:23.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2957702 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:23.279 23:23:45 -- target/shutdown.sh@87 -- # sleep 1 
00:27:24.223 23:23:46 -- target/shutdown.sh@88 -- # kill -0 2957414 00:27:24.223 23:23:46 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:24.223 23:23:46 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:24.223 23:23:46 -- nvmf/common.sh@520 -- # config=() 00:27:24.223 23:23:46 -- nvmf/common.sh@520 -- # local subsystem config 00:27:24.223 23:23:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:24.223 23:23:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:24.223 { 00:27:24.223 "params": { 00:27:24.223 "name": "Nvme$subsystem", 00:27:24.223 "trtype": "$TEST_TRANSPORT", 00:27:24.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:24.223 "adrfam": "ipv4", 00:27:24.223 "trsvcid": "$NVMF_PORT", 00:27:24.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:24.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:24.223 "hdgst": ${hdgst:-false}, 00:27:24.223 "ddgst": ${ddgst:-false} 00:27:24.223 }, 00:27:24.223 "method": "bdev_nvme_attach_controller" 00:27:24.223 } 00:27:24.223 EOF 00:27:24.223 )") 00:27:24.223 23:23:46 -- nvmf/common.sh@542 -- # cat 00:27:24.223 23:23:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:24.223 23:23:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:24.223 { 00:27:24.223 "params": { 00:27:24.223 "name": "Nvme$subsystem", 00:27:24.223 "trtype": "$TEST_TRANSPORT", 00:27:24.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:24.223 "adrfam": "ipv4", 00:27:24.223 "trsvcid": "$NVMF_PORT", 00:27:24.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:24.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:24.223 "hdgst": ${hdgst:-false}, 00:27:24.223 "ddgst": ${ddgst:-false} 00:27:24.223 }, 00:27:24.223 "method": "bdev_nvme_attach_controller" 00:27:24.223 } 00:27:24.223 EOF 00:27:24.223 )") 00:27:24.223 23:23:46 -- nvmf/common.sh@542 -- # cat 00:27:24.223 23:23:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:24.223 23:23:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:24.223 { 00:27:24.223 "params": { 00:27:24.223 "name": "Nvme$subsystem", 00:27:24.223 "trtype": "$TEST_TRANSPORT", 00:27:24.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:24.223 "adrfam": "ipv4", 00:27:24.223 "trsvcid": "$NVMF_PORT", 00:27:24.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:24.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:24.223 "hdgst": ${hdgst:-false}, 00:27:24.223 "ddgst": ${ddgst:-false} 00:27:24.223 }, 00:27:24.223 "method": "bdev_nvme_attach_controller" 00:27:24.223 } 00:27:24.223 EOF 00:27:24.223 )") 00:27:24.223 23:23:46 -- nvmf/common.sh@542 -- # cat 00:27:24.223 23:23:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:24.223 23:23:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:24.223 { 00:27:24.223 "params": { 00:27:24.223 "name": "Nvme$subsystem", 00:27:24.223 "trtype": "$TEST_TRANSPORT", 00:27:24.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:24.223 "adrfam": "ipv4", 00:27:24.223 "trsvcid": "$NVMF_PORT", 00:27:24.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:24.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:24.223 "hdgst": ${hdgst:-false}, 00:27:24.223 "ddgst": ${ddgst:-false} 00:27:24.223 }, 00:27:24.223 "method": "bdev_nvme_attach_controller" 00:27:24.223 } 00:27:24.223 EOF 00:27:24.223 )") 00:27:24.223 23:23:46 -- nvmf/common.sh@542 -- # cat 00:27:24.223 23:23:46 -- 
nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:24.223 23:23:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:24.223 { 00:27:24.223 "params": { 00:27:24.223 "name": "Nvme$subsystem", 00:27:24.223 "trtype": "$TEST_TRANSPORT", 00:27:24.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:24.223 "adrfam": "ipv4", 00:27:24.223 "trsvcid": "$NVMF_PORT", 00:27:24.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:24.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:24.223 "hdgst": ${hdgst:-false}, 00:27:24.223 "ddgst": ${ddgst:-false} 00:27:24.223 }, 00:27:24.223 "method": "bdev_nvme_attach_controller" 00:27:24.223 } 00:27:24.223 EOF 00:27:24.223 )") 00:27:24.223 23:23:46 -- nvmf/common.sh@542 -- # cat 00:27:24.223 23:23:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:24.223 23:23:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:24.223 { 00:27:24.223 "params": { 00:27:24.223 "name": "Nvme$subsystem", 00:27:24.223 "trtype": "$TEST_TRANSPORT", 00:27:24.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:24.223 "adrfam": "ipv4", 00:27:24.223 "trsvcid": "$NVMF_PORT", 00:27:24.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:24.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:24.223 "hdgst": ${hdgst:-false}, 00:27:24.223 "ddgst": ${ddgst:-false} 00:27:24.223 }, 00:27:24.223 "method": "bdev_nvme_attach_controller" 00:27:24.223 } 00:27:24.223 EOF 00:27:24.223 )") 00:27:24.223 23:23:46 -- nvmf/common.sh@542 -- # cat 00:27:24.223 [2024-06-07 23:23:46.867792] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:24.223 [2024-06-07 23:23:46.867849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2958182 ] 00:27:24.223 23:23:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:24.223 23:23:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:24.223 { 00:27:24.223 "params": { 00:27:24.223 "name": "Nvme$subsystem", 00:27:24.223 "trtype": "$TEST_TRANSPORT", 00:27:24.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:24.223 "adrfam": "ipv4", 00:27:24.223 "trsvcid": "$NVMF_PORT", 00:27:24.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:24.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:24.223 "hdgst": ${hdgst:-false}, 00:27:24.223 "ddgst": ${ddgst:-false} 00:27:24.223 }, 00:27:24.223 "method": "bdev_nvme_attach_controller" 00:27:24.223 } 00:27:24.223 EOF 00:27:24.223 )") 00:27:24.223 23:23:46 -- nvmf/common.sh@542 -- # cat 00:27:24.223 23:23:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:24.223 23:23:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:24.223 { 00:27:24.223 "params": { 00:27:24.223 "name": "Nvme$subsystem", 00:27:24.223 "trtype": "$TEST_TRANSPORT", 00:27:24.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:24.223 "adrfam": "ipv4", 00:27:24.223 "trsvcid": "$NVMF_PORT", 00:27:24.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:24.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:24.223 "hdgst": ${hdgst:-false}, 00:27:24.223 "ddgst": ${ddgst:-false} 00:27:24.223 }, 00:27:24.223 "method": "bdev_nvme_attach_controller" 00:27:24.224 } 00:27:24.224 EOF 00:27:24.224 )") 00:27:24.224 23:23:46 -- nvmf/common.sh@542 -- # cat 00:27:24.224 23:23:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:24.224 23:23:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 
00:27:24.224 { 00:27:24.224 "params": { 00:27:24.224 "name": "Nvme$subsystem", 00:27:24.224 "trtype": "$TEST_TRANSPORT", 00:27:24.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:24.224 "adrfam": "ipv4", 00:27:24.224 "trsvcid": "$NVMF_PORT", 00:27:24.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:24.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:24.224 "hdgst": ${hdgst:-false}, 00:27:24.224 "ddgst": ${ddgst:-false} 00:27:24.224 }, 00:27:24.224 "method": "bdev_nvme_attach_controller" 00:27:24.224 } 00:27:24.224 EOF 00:27:24.224 )") 00:27:24.224 23:23:46 -- nvmf/common.sh@542 -- # cat 00:27:24.224 23:23:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:24.224 23:23:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:24.224 { 00:27:24.224 "params": { 00:27:24.224 "name": "Nvme$subsystem", 00:27:24.224 "trtype": "$TEST_TRANSPORT", 00:27:24.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:24.224 "adrfam": "ipv4", 00:27:24.224 "trsvcid": "$NVMF_PORT", 00:27:24.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:24.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:24.224 "hdgst": ${hdgst:-false}, 00:27:24.224 "ddgst": ${ddgst:-false} 00:27:24.224 }, 00:27:24.224 "method": "bdev_nvme_attach_controller" 00:27:24.224 } 00:27:24.224 EOF 00:27:24.224 )") 00:27:24.224 EAL: No free 2048 kB hugepages reported on node 1 00:27:24.224 23:23:46 -- nvmf/common.sh@542 -- # cat 00:27:24.224 23:23:46 -- nvmf/common.sh@544 -- # jq . 00:27:24.485 23:23:46 -- nvmf/common.sh@545 -- # IFS=, 00:27:24.485 23:23:46 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:24.485 "params": { 00:27:24.485 "name": "Nvme1", 00:27:24.485 "trtype": "tcp", 00:27:24.485 "traddr": "10.0.0.2", 00:27:24.485 "adrfam": "ipv4", 00:27:24.485 "trsvcid": "4420", 00:27:24.485 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:24.485 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:24.485 "hdgst": false, 00:27:24.485 "ddgst": false 00:27:24.485 }, 00:27:24.485 "method": "bdev_nvme_attach_controller" 00:27:24.485 },{ 00:27:24.485 "params": { 00:27:24.485 "name": "Nvme2", 00:27:24.485 "trtype": "tcp", 00:27:24.485 "traddr": "10.0.0.2", 00:27:24.485 "adrfam": "ipv4", 00:27:24.485 "trsvcid": "4420", 00:27:24.485 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:24.485 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:24.485 "hdgst": false, 00:27:24.485 "ddgst": false 00:27:24.485 }, 00:27:24.485 "method": "bdev_nvme_attach_controller" 00:27:24.485 },{ 00:27:24.485 "params": { 00:27:24.485 "name": "Nvme3", 00:27:24.485 "trtype": "tcp", 00:27:24.485 "traddr": "10.0.0.2", 00:27:24.485 "adrfam": "ipv4", 00:27:24.485 "trsvcid": "4420", 00:27:24.485 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:24.485 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:24.485 "hdgst": false, 00:27:24.485 "ddgst": false 00:27:24.485 }, 00:27:24.485 "method": "bdev_nvme_attach_controller" 00:27:24.485 },{ 00:27:24.485 "params": { 00:27:24.485 "name": "Nvme4", 00:27:24.485 "trtype": "tcp", 00:27:24.485 "traddr": "10.0.0.2", 00:27:24.485 "adrfam": "ipv4", 00:27:24.485 "trsvcid": "4420", 00:27:24.485 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:24.485 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:24.485 "hdgst": false, 00:27:24.485 "ddgst": false 00:27:24.485 }, 00:27:24.485 "method": "bdev_nvme_attach_controller" 00:27:24.485 },{ 00:27:24.485 "params": { 00:27:24.485 "name": "Nvme5", 00:27:24.485 "trtype": "tcp", 00:27:24.485 "traddr": "10.0.0.2", 00:27:24.485 "adrfam": "ipv4", 00:27:24.485 "trsvcid": "4420", 00:27:24.485 "subnqn": 
"nqn.2016-06.io.spdk:cnode5", 00:27:24.485 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:24.485 "hdgst": false, 00:27:24.485 "ddgst": false 00:27:24.485 }, 00:27:24.485 "method": "bdev_nvme_attach_controller" 00:27:24.485 },{ 00:27:24.485 "params": { 00:27:24.485 "name": "Nvme6", 00:27:24.485 "trtype": "tcp", 00:27:24.485 "traddr": "10.0.0.2", 00:27:24.485 "adrfam": "ipv4", 00:27:24.485 "trsvcid": "4420", 00:27:24.485 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:24.485 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:24.485 "hdgst": false, 00:27:24.485 "ddgst": false 00:27:24.485 }, 00:27:24.485 "method": "bdev_nvme_attach_controller" 00:27:24.485 },{ 00:27:24.485 "params": { 00:27:24.485 "name": "Nvme7", 00:27:24.485 "trtype": "tcp", 00:27:24.485 "traddr": "10.0.0.2", 00:27:24.485 "adrfam": "ipv4", 00:27:24.485 "trsvcid": "4420", 00:27:24.485 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:24.485 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:24.485 "hdgst": false, 00:27:24.485 "ddgst": false 00:27:24.485 }, 00:27:24.485 "method": "bdev_nvme_attach_controller" 00:27:24.485 },{ 00:27:24.485 "params": { 00:27:24.485 "name": "Nvme8", 00:27:24.485 "trtype": "tcp", 00:27:24.485 "traddr": "10.0.0.2", 00:27:24.485 "adrfam": "ipv4", 00:27:24.485 "trsvcid": "4420", 00:27:24.485 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:24.485 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:24.485 "hdgst": false, 00:27:24.485 "ddgst": false 00:27:24.485 }, 00:27:24.485 "method": "bdev_nvme_attach_controller" 00:27:24.485 },{ 00:27:24.485 "params": { 00:27:24.485 "name": "Nvme9", 00:27:24.485 "trtype": "tcp", 00:27:24.485 "traddr": "10.0.0.2", 00:27:24.485 "adrfam": "ipv4", 00:27:24.485 "trsvcid": "4420", 00:27:24.485 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:24.485 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:24.485 "hdgst": false, 00:27:24.485 "ddgst": false 00:27:24.485 }, 00:27:24.485 "method": "bdev_nvme_attach_controller" 00:27:24.485 },{ 00:27:24.485 "params": { 00:27:24.485 "name": "Nvme10", 00:27:24.485 "trtype": "tcp", 00:27:24.485 "traddr": "10.0.0.2", 00:27:24.485 "adrfam": "ipv4", 00:27:24.485 "trsvcid": "4420", 00:27:24.485 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:24.485 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:24.485 "hdgst": false, 00:27:24.485 "ddgst": false 00:27:24.485 }, 00:27:24.485 "method": "bdev_nvme_attach_controller" 00:27:24.485 }' 00:27:24.485 [2024-06-07 23:23:46.929499] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:24.485 [2024-06-07 23:23:46.958569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:25.872 Running I/O for 1 seconds... 
00:27:26.815 00:27:26.815 Latency(us) 00:27:26.815 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:26.815 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:26.815 Verification LBA range: start 0x0 length 0x400 00:27:26.815 Nvme1n1 : 1.07 403.98 25.25 0.00 0.00 154624.06 26869.76 149422.08 00:27:26.815 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:26.815 Verification LBA range: start 0x0 length 0x400 00:27:26.815 Nvme2n1 : 1.08 401.31 25.08 0.00 0.00 154443.86 27306.67 145053.01 00:27:26.815 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:26.815 Verification LBA range: start 0x0 length 0x400 00:27:26.815 Nvme3n1 : 1.10 438.67 27.42 0.00 0.00 142003.77 13271.04 123207.68 00:27:26.815 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:26.815 Verification LBA range: start 0x0 length 0x400 00:27:26.815 Nvme4n1 : 1.10 439.56 27.47 0.00 0.00 140486.64 14854.83 127576.75 00:27:26.815 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:26.815 Verification LBA range: start 0x0 length 0x400 00:27:26.815 Nvme5n1 : 1.14 424.93 26.56 0.00 0.00 139880.80 14417.92 116217.17 00:27:26.815 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:26.815 Verification LBA range: start 0x0 length 0x400 00:27:26.815 Nvme6n1 : 1.08 403.70 25.23 0.00 0.00 148666.26 9393.49 131072.00 00:27:26.815 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:26.815 Verification LBA range: start 0x0 length 0x400 00:27:26.815 Nvme7n1 : 1.10 437.61 27.35 0.00 0.00 138125.37 13926.40 112721.92 00:27:26.815 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:26.815 Verification LBA range: start 0x0 length 0x400 00:27:26.815 Nvme8n1 : 1.15 421.25 26.33 0.00 0.00 137490.71 14964.05 111848.11 00:27:26.815 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:26.815 Verification LBA range: start 0x0 length 0x400 00:27:26.815 Nvme9n1 : 1.11 436.46 27.28 0.00 0.00 136324.13 11031.89 118838.61 00:27:26.815 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:26.815 Verification LBA range: start 0x0 length 0x400 00:27:26.815 Nvme10n1 : 1.11 437.00 27.31 0.00 0.00 135352.46 5461.33 120586.24 00:27:26.815 =================================================================================================================== 00:27:26.815 Total : 4244.47 265.28 0.00 0.00 142430.83 5461.33 149422.08 00:27:27.076 23:23:49 -- target/shutdown.sh@93 -- # stoptarget 00:27:27.076 23:23:49 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:27.076 23:23:49 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:27.076 23:23:49 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:27.076 23:23:49 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:27.076 23:23:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:27.076 23:23:49 -- nvmf/common.sh@116 -- # sync 00:27:27.076 23:23:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:27.076 23:23:49 -- nvmf/common.sh@119 -- # set +e 00:27:27.076 23:23:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:27.076 23:23:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:27.076 rmmod nvme_tcp 00:27:27.076 rmmod nvme_fabrics 00:27:27.076 rmmod nvme_keyring 
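Two quick cross-checks on the bdevperf table above, using nothing but the reported columns: the MiB/s figure is simply IOPS times the 64 KiB I/O size, and the Total row is the plain sum of the ten jobs.

  Nvme1n1:  403.98 IOPS x 65536 B = 26,475,233 B/s ≈ 25.25 MiB/s      (matches the MiB/s column)
  Total:    403.98 + 401.31 + 438.67 + 439.56 + 424.93
          + 403.70 + 437.61 + 421.25 + 436.46 + 437.00 = 4244.47 IOPS (matches the Total row)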
00:27:27.076 23:23:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:27.076 23:23:49 -- nvmf/common.sh@123 -- # set -e 00:27:27.076 23:23:49 -- nvmf/common.sh@124 -- # return 0 00:27:27.076 23:23:49 -- nvmf/common.sh@477 -- # '[' -n 2957414 ']' 00:27:27.076 23:23:49 -- nvmf/common.sh@478 -- # killprocess 2957414 00:27:27.076 23:23:49 -- common/autotest_common.sh@926 -- # '[' -z 2957414 ']' 00:27:27.076 23:23:49 -- common/autotest_common.sh@930 -- # kill -0 2957414 00:27:27.076 23:23:49 -- common/autotest_common.sh@931 -- # uname 00:27:27.076 23:23:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:27.076 23:23:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2957414 00:27:27.337 23:23:49 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:27.337 23:23:49 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:27.337 23:23:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2957414' 00:27:27.337 killing process with pid 2957414 00:27:27.337 23:23:49 -- common/autotest_common.sh@945 -- # kill 2957414 00:27:27.337 23:23:49 -- common/autotest_common.sh@950 -- # wait 2957414 00:27:27.337 23:23:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:27.337 23:23:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:27.337 23:23:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:27.337 23:23:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:27.337 23:23:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:27.337 23:23:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.337 23:23:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:27.337 23:23:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.884 23:23:52 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:29.884 00:27:29.884 real 0m16.530s 00:27:29.884 user 0m33.407s 00:27:29.884 sys 0m6.667s 00:27:29.884 23:23:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:29.884 23:23:52 -- common/autotest_common.sh@10 -- # set +x 00:27:29.884 ************************************ 00:27:29.884 END TEST nvmf_shutdown_tc1 00:27:29.884 ************************************ 00:27:29.884 23:23:52 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:29.884 23:23:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:29.884 23:23:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:29.884 23:23:52 -- common/autotest_common.sh@10 -- # set +x 00:27:29.884 ************************************ 00:27:29.884 START TEST nvmf_shutdown_tc2 00:27:29.884 ************************************ 00:27:29.884 23:23:52 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:27:29.884 23:23:52 -- target/shutdown.sh@98 -- # starttarget 00:27:29.884 23:23:52 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:29.884 23:23:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:29.884 23:23:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:29.884 23:23:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:29.884 23:23:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:29.884 23:23:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:29.884 23:23:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.884 23:23:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:29.884 23:23:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.884 23:23:52 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:29.884 23:23:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:29.884 23:23:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:29.884 23:23:52 -- common/autotest_common.sh@10 -- # set +x 00:27:29.884 23:23:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:29.884 23:23:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:29.884 23:23:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:29.884 23:23:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:29.884 23:23:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:29.884 23:23:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:29.884 23:23:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:29.884 23:23:52 -- nvmf/common.sh@294 -- # net_devs=() 00:27:29.884 23:23:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:29.884 23:23:52 -- nvmf/common.sh@295 -- # e810=() 00:27:29.884 23:23:52 -- nvmf/common.sh@295 -- # local -ga e810 00:27:29.884 23:23:52 -- nvmf/common.sh@296 -- # x722=() 00:27:29.884 23:23:52 -- nvmf/common.sh@296 -- # local -ga x722 00:27:29.884 23:23:52 -- nvmf/common.sh@297 -- # mlx=() 00:27:29.884 23:23:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:29.884 23:23:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:29.884 23:23:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:29.884 23:23:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:29.884 23:23:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:29.884 23:23:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:29.884 23:23:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:29.884 23:23:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:29.884 23:23:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:29.884 23:23:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:29.884 23:23:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:29.884 23:23:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:29.884 23:23:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:29.884 23:23:52 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:29.884 23:23:52 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:29.884 23:23:52 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:29.884 23:23:52 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:29.884 23:23:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:29.884 23:23:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:29.884 23:23:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:29.884 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:29.884 23:23:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:29.884 23:23:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:29.884 23:23:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.884 23:23:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.884 23:23:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:29.884 23:23:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:29.884 23:23:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:29.884 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:29.884 23:23:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:29.884 23:23:52 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:29.884 23:23:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.884 23:23:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.884 23:23:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:29.884 23:23:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:29.884 23:23:52 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:29.884 23:23:52 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:29.884 23:23:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:29.884 23:23:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.884 23:23:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:29.884 23:23:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.884 23:23:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:29.884 Found net devices under 0000:31:00.0: cvl_0_0 00:27:29.884 23:23:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.884 23:23:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:29.884 23:23:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.884 23:23:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:29.884 23:23:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.884 23:23:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:29.884 Found net devices under 0000:31:00.1: cvl_0_1 00:27:29.884 23:23:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.884 23:23:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:29.884 23:23:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:29.884 23:23:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:29.884 23:23:52 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:29.884 23:23:52 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:29.884 23:23:52 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:29.884 23:23:52 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:29.884 23:23:52 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:29.884 23:23:52 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:29.884 23:23:52 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:29.884 23:23:52 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:29.884 23:23:52 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:29.884 23:23:52 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:29.884 23:23:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:29.884 23:23:52 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:29.884 23:23:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:29.884 23:23:52 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:29.884 23:23:52 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:29.884 23:23:52 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:29.884 23:23:52 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:29.884 23:23:52 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:29.884 23:23:52 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:29.885 23:23:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:29.885 23:23:52 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:27:29.885 23:23:52 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:29.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:29.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:27:29.885 00:27:29.885 --- 10.0.0.2 ping statistics --- 00:27:29.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.885 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:27:29.885 23:23:52 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:29.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:29.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:27:29.885 00:27:29.885 --- 10.0.0.1 ping statistics --- 00:27:29.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.885 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:27:29.885 23:23:52 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:29.885 23:23:52 -- nvmf/common.sh@410 -- # return 0 00:27:29.885 23:23:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:29.885 23:23:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:29.885 23:23:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:29.885 23:23:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:29.885 23:23:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:29.885 23:23:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:29.885 23:23:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:29.885 23:23:52 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:29.885 23:23:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:29.885 23:23:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:29.885 23:23:52 -- common/autotest_common.sh@10 -- # set +x 00:27:29.885 23:23:52 -- nvmf/common.sh@469 -- # nvmfpid=2959308 00:27:29.885 23:23:52 -- nvmf/common.sh@470 -- # waitforlisten 2959308 00:27:29.885 23:23:52 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:29.885 23:23:52 -- common/autotest_common.sh@819 -- # '[' -z 2959308 ']' 00:27:29.885 23:23:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:29.885 23:23:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:29.885 23:23:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:29.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:29.885 23:23:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:29.885 23:23:52 -- common/autotest_common.sh@10 -- # set +x 00:27:29.885 [2024-06-07 23:23:52.522089] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
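As in tc1, the tc2 target is launched with -i 0 -e 0xFFFF -m 0x1E: shared-memory instance 0 (hence the /dev/shm/nvmf_trace.0 and 'spdk_trace -s nvmf -i 0' hints), every tracepoint group enabled, and a four-core reactor mask. The mask decodes with one line of shell and matches the four "Reactor started on core N" notices that follow:

  for c in {0..7}; do (( (0x1E >> c) & 1 )) && printf '%d ' "$c"; done; echo
  # prints: 1 2 3 4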
00:27:29.885 [2024-06-07 23:23:52.522136] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:29.885 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.146 [2024-06-07 23:23:52.602814] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:30.146 [2024-06-07 23:23:52.630898] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:30.146 [2024-06-07 23:23:52.630994] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:30.146 [2024-06-07 23:23:52.631001] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:30.146 [2024-06-07 23:23:52.631006] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:30.146 [2024-06-07 23:23:52.631131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:30.146 [2024-06-07 23:23:52.631293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:30.146 [2024-06-07 23:23:52.631604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:30.146 [2024-06-07 23:23:52.631605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:30.716 23:23:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:30.716 23:23:53 -- common/autotest_common.sh@852 -- # return 0 00:27:30.716 23:23:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:30.716 23:23:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:30.716 23:23:53 -- common/autotest_common.sh@10 -- # set +x 00:27:30.716 23:23:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:30.716 23:23:53 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:30.716 23:23:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:30.716 23:23:53 -- common/autotest_common.sh@10 -- # set +x 00:27:30.716 [2024-06-07 23:23:53.332312] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:30.716 23:23:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:30.716 23:23:53 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:30.716 23:23:53 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:30.716 23:23:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:30.716 23:23:53 -- common/autotest_common.sh@10 -- # set +x 00:27:30.716 23:23:53 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:30.716 23:23:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:30.716 23:23:53 -- target/shutdown.sh@28 -- # cat 00:27:30.716 23:23:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:30.716 23:23:53 -- target/shutdown.sh@28 -- # cat 00:27:30.716 23:23:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:30.716 23:23:53 -- target/shutdown.sh@28 -- # cat 00:27:30.716 23:23:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:30.716 23:23:53 -- target/shutdown.sh@28 -- # cat 00:27:30.716 23:23:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:30.716 23:23:53 -- target/shutdown.sh@28 -- # cat 00:27:30.716 23:23:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:30.716 23:23:53 -- 
target/shutdown.sh@28 -- # cat 00:27:30.716 23:23:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:30.716 23:23:53 -- target/shutdown.sh@28 -- # cat 00:27:30.716 23:23:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:30.716 23:23:53 -- target/shutdown.sh@28 -- # cat 00:27:30.716 23:23:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:30.716 23:23:53 -- target/shutdown.sh@28 -- # cat 00:27:30.716 23:23:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:30.716 23:23:53 -- target/shutdown.sh@28 -- # cat 00:27:30.716 23:23:53 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:30.716 23:23:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:30.716 23:23:53 -- common/autotest_common.sh@10 -- # set +x 00:27:30.976 Malloc1 00:27:30.976 [2024-06-07 23:23:53.431107] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:30.976 Malloc2 00:27:30.976 Malloc3 00:27:30.976 Malloc4 00:27:30.976 Malloc5 00:27:30.976 Malloc6 00:27:30.976 Malloc7 00:27:31.237 Malloc8 00:27:31.237 Malloc9 00:27:31.237 Malloc10 00:27:31.237 23:23:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:31.237 23:23:53 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:31.237 23:23:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:31.237 23:23:53 -- common/autotest_common.sh@10 -- # set +x 00:27:31.237 23:23:53 -- target/shutdown.sh@102 -- # perfpid=2959690 00:27:31.237 23:23:53 -- target/shutdown.sh@103 -- # waitforlisten 2959690 /var/tmp/bdevperf.sock 00:27:31.237 23:23:53 -- common/autotest_common.sh@819 -- # '[' -z 2959690 ']' 00:27:31.237 23:23:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:31.237 23:23:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:31.237 23:23:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:31.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
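The create_subsystems phase traced above is batched: shutdown.sh first removes any stale rpcs.txt, the for-loop over num_subsystems then appends one block of RPC commands per subsystem with cat, and the single rpc_cmd call at target/shutdown.sh@35 replays the whole file, which is why Malloc1 through Malloc10 and the 10.0.0.2:4420 listener only appear after the batch has run. Once the batch is applied the script moves straight on to launching bdevperf against those subsystems, which is the wait on /var/tmp/bdevperf.sock seen just above. The trace does not show the block each iteration appends; a minimal sketch of what one iteration plausibly writes is below, with the malloc size, serial number and namespace layout being illustrative assumptions rather than values taken from this log.

  # hypothetical per-subsystem block appended to rpcs.txt (illustrative sketch only)
  bdev_malloc_create -b Malloc$i 64 512
  nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
  nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
  nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420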
00:27:31.237 23:23:53 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:31.237 23:23:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:31.237 23:23:53 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:31.237 23:23:53 -- common/autotest_common.sh@10 -- # set +x 00:27:31.237 23:23:53 -- nvmf/common.sh@520 -- # config=() 00:27:31.237 23:23:53 -- nvmf/common.sh@520 -- # local subsystem config 00:27:31.237 23:23:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:31.237 23:23:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:31.237 { 00:27:31.237 "params": { 00:27:31.237 "name": "Nvme$subsystem", 00:27:31.237 "trtype": "$TEST_TRANSPORT", 00:27:31.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.237 "adrfam": "ipv4", 00:27:31.237 "trsvcid": "$NVMF_PORT", 00:27:31.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.237 "hdgst": ${hdgst:-false}, 00:27:31.237 "ddgst": ${ddgst:-false} 00:27:31.237 }, 00:27:31.237 "method": "bdev_nvme_attach_controller" 00:27:31.237 } 00:27:31.237 EOF 00:27:31.237 )") 00:27:31.237 23:23:53 -- nvmf/common.sh@542 -- # cat 00:27:31.237 23:23:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:31.237 23:23:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:31.237 { 00:27:31.237 "params": { 00:27:31.237 "name": "Nvme$subsystem", 00:27:31.237 "trtype": "$TEST_TRANSPORT", 00:27:31.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.237 "adrfam": "ipv4", 00:27:31.237 "trsvcid": "$NVMF_PORT", 00:27:31.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.237 "hdgst": ${hdgst:-false}, 00:27:31.237 "ddgst": ${ddgst:-false} 00:27:31.237 }, 00:27:31.237 "method": "bdev_nvme_attach_controller" 00:27:31.237 } 00:27:31.237 EOF 00:27:31.237 )") 00:27:31.237 23:23:53 -- nvmf/common.sh@542 -- # cat 00:27:31.237 23:23:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:31.237 23:23:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:31.237 { 00:27:31.237 "params": { 00:27:31.237 "name": "Nvme$subsystem", 00:27:31.237 "trtype": "$TEST_TRANSPORT", 00:27:31.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.237 "adrfam": "ipv4", 00:27:31.237 "trsvcid": "$NVMF_PORT", 00:27:31.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.237 "hdgst": ${hdgst:-false}, 00:27:31.237 "ddgst": ${ddgst:-false} 00:27:31.237 }, 00:27:31.237 "method": "bdev_nvme_attach_controller" 00:27:31.237 } 00:27:31.237 EOF 00:27:31.237 )") 00:27:31.237 23:23:53 -- nvmf/common.sh@542 -- # cat 00:27:31.237 23:23:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:31.237 23:23:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:31.237 { 00:27:31.237 "params": { 00:27:31.237 "name": "Nvme$subsystem", 00:27:31.237 "trtype": "$TEST_TRANSPORT", 00:27:31.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.237 "adrfam": "ipv4", 00:27:31.237 "trsvcid": "$NVMF_PORT", 00:27:31.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.237 "hdgst": ${hdgst:-false}, 00:27:31.237 "ddgst": ${ddgst:-false} 00:27:31.237 }, 00:27:31.237 "method": "bdev_nvme_attach_controller" 00:27:31.237 } 00:27:31.237 EOF 00:27:31.237 )") 
00:27:31.237 23:23:53 -- nvmf/common.sh@542 -- # cat 00:27:31.237 23:23:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:31.237 23:23:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:31.237 { 00:27:31.237 "params": { 00:27:31.238 "name": "Nvme$subsystem", 00:27:31.238 "trtype": "$TEST_TRANSPORT", 00:27:31.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.238 "adrfam": "ipv4", 00:27:31.238 "trsvcid": "$NVMF_PORT", 00:27:31.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.238 "hdgst": ${hdgst:-false}, 00:27:31.238 "ddgst": ${ddgst:-false} 00:27:31.238 }, 00:27:31.238 "method": "bdev_nvme_attach_controller" 00:27:31.238 } 00:27:31.238 EOF 00:27:31.238 )") 00:27:31.238 23:23:53 -- nvmf/common.sh@542 -- # cat 00:27:31.238 23:23:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:31.238 23:23:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:31.238 { 00:27:31.238 "params": { 00:27:31.238 "name": "Nvme$subsystem", 00:27:31.238 "trtype": "$TEST_TRANSPORT", 00:27:31.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.238 "adrfam": "ipv4", 00:27:31.238 "trsvcid": "$NVMF_PORT", 00:27:31.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.238 "hdgst": ${hdgst:-false}, 00:27:31.238 "ddgst": ${ddgst:-false} 00:27:31.238 }, 00:27:31.238 "method": "bdev_nvme_attach_controller" 00:27:31.238 } 00:27:31.238 EOF 00:27:31.238 )") 00:27:31.238 23:23:53 -- nvmf/common.sh@542 -- # cat 00:27:31.238 [2024-06-07 23:23:53.873443] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:31.238 [2024-06-07 23:23:53.873499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2959690 ] 00:27:31.238 23:23:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:31.238 23:23:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:31.238 { 00:27:31.238 "params": { 00:27:31.238 "name": "Nvme$subsystem", 00:27:31.238 "trtype": "$TEST_TRANSPORT", 00:27:31.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.238 "adrfam": "ipv4", 00:27:31.238 "trsvcid": "$NVMF_PORT", 00:27:31.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.238 "hdgst": ${hdgst:-false}, 00:27:31.238 "ddgst": ${ddgst:-false} 00:27:31.238 }, 00:27:31.238 "method": "bdev_nvme_attach_controller" 00:27:31.238 } 00:27:31.238 EOF 00:27:31.238 )") 00:27:31.238 23:23:53 -- nvmf/common.sh@542 -- # cat 00:27:31.238 23:23:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:31.238 23:23:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:31.238 { 00:27:31.238 "params": { 00:27:31.238 "name": "Nvme$subsystem", 00:27:31.238 "trtype": "$TEST_TRANSPORT", 00:27:31.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.238 "adrfam": "ipv4", 00:27:31.238 "trsvcid": "$NVMF_PORT", 00:27:31.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.238 "hdgst": ${hdgst:-false}, 00:27:31.238 "ddgst": ${ddgst:-false} 00:27:31.238 }, 00:27:31.238 "method": "bdev_nvme_attach_controller" 00:27:31.238 } 00:27:31.238 EOF 00:27:31.238 )") 00:27:31.238 23:23:53 -- nvmf/common.sh@542 -- # cat 00:27:31.238 23:23:53 -- nvmf/common.sh@522 -- # for subsystem in 
"${@:-1}" 00:27:31.238 23:23:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:31.238 { 00:27:31.238 "params": { 00:27:31.238 "name": "Nvme$subsystem", 00:27:31.238 "trtype": "$TEST_TRANSPORT", 00:27:31.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.238 "adrfam": "ipv4", 00:27:31.238 "trsvcid": "$NVMF_PORT", 00:27:31.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.238 "hdgst": ${hdgst:-false}, 00:27:31.238 "ddgst": ${ddgst:-false} 00:27:31.238 }, 00:27:31.238 "method": "bdev_nvme_attach_controller" 00:27:31.238 } 00:27:31.238 EOF 00:27:31.238 )") 00:27:31.238 23:23:53 -- nvmf/common.sh@542 -- # cat 00:27:31.238 23:23:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:31.238 23:23:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:31.238 { 00:27:31.238 "params": { 00:27:31.238 "name": "Nvme$subsystem", 00:27:31.238 "trtype": "$TEST_TRANSPORT", 00:27:31.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.238 "adrfam": "ipv4", 00:27:31.238 "trsvcid": "$NVMF_PORT", 00:27:31.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.238 "hdgst": ${hdgst:-false}, 00:27:31.238 "ddgst": ${ddgst:-false} 00:27:31.238 }, 00:27:31.238 "method": "bdev_nvme_attach_controller" 00:27:31.238 } 00:27:31.238 EOF 00:27:31.238 )") 00:27:31.238 EAL: No free 2048 kB hugepages reported on node 1 00:27:31.238 23:23:53 -- nvmf/common.sh@542 -- # cat 00:27:31.238 23:23:53 -- nvmf/common.sh@544 -- # jq . 00:27:31.238 23:23:53 -- nvmf/common.sh@545 -- # IFS=, 00:27:31.238 23:23:53 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:31.238 "params": { 00:27:31.238 "name": "Nvme1", 00:27:31.238 "trtype": "tcp", 00:27:31.238 "traddr": "10.0.0.2", 00:27:31.238 "adrfam": "ipv4", 00:27:31.238 "trsvcid": "4420", 00:27:31.238 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:31.238 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:31.238 "hdgst": false, 00:27:31.238 "ddgst": false 00:27:31.238 }, 00:27:31.238 "method": "bdev_nvme_attach_controller" 00:27:31.238 },{ 00:27:31.238 "params": { 00:27:31.238 "name": "Nvme2", 00:27:31.238 "trtype": "tcp", 00:27:31.238 "traddr": "10.0.0.2", 00:27:31.238 "adrfam": "ipv4", 00:27:31.238 "trsvcid": "4420", 00:27:31.238 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:31.238 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:31.238 "hdgst": false, 00:27:31.238 "ddgst": false 00:27:31.238 }, 00:27:31.238 "method": "bdev_nvme_attach_controller" 00:27:31.238 },{ 00:27:31.238 "params": { 00:27:31.238 "name": "Nvme3", 00:27:31.238 "trtype": "tcp", 00:27:31.238 "traddr": "10.0.0.2", 00:27:31.238 "adrfam": "ipv4", 00:27:31.238 "trsvcid": "4420", 00:27:31.238 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:31.238 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:31.238 "hdgst": false, 00:27:31.238 "ddgst": false 00:27:31.238 }, 00:27:31.238 "method": "bdev_nvme_attach_controller" 00:27:31.238 },{ 00:27:31.238 "params": { 00:27:31.238 "name": "Nvme4", 00:27:31.238 "trtype": "tcp", 00:27:31.238 "traddr": "10.0.0.2", 00:27:31.238 "adrfam": "ipv4", 00:27:31.238 "trsvcid": "4420", 00:27:31.238 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:31.238 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:31.238 "hdgst": false, 00:27:31.238 "ddgst": false 00:27:31.238 }, 00:27:31.238 "method": "bdev_nvme_attach_controller" 00:27:31.238 },{ 00:27:31.238 "params": { 00:27:31.238 "name": "Nvme5", 00:27:31.238 "trtype": "tcp", 00:27:31.238 "traddr": "10.0.0.2", 00:27:31.238 
"adrfam": "ipv4", 00:27:31.238 "trsvcid": "4420", 00:27:31.238 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:31.238 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:31.238 "hdgst": false, 00:27:31.238 "ddgst": false 00:27:31.238 }, 00:27:31.238 "method": "bdev_nvme_attach_controller" 00:27:31.238 },{ 00:27:31.238 "params": { 00:27:31.238 "name": "Nvme6", 00:27:31.238 "trtype": "tcp", 00:27:31.238 "traddr": "10.0.0.2", 00:27:31.238 "adrfam": "ipv4", 00:27:31.238 "trsvcid": "4420", 00:27:31.238 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:31.238 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:31.238 "hdgst": false, 00:27:31.238 "ddgst": false 00:27:31.238 }, 00:27:31.238 "method": "bdev_nvme_attach_controller" 00:27:31.238 },{ 00:27:31.238 "params": { 00:27:31.238 "name": "Nvme7", 00:27:31.238 "trtype": "tcp", 00:27:31.238 "traddr": "10.0.0.2", 00:27:31.238 "adrfam": "ipv4", 00:27:31.238 "trsvcid": "4420", 00:27:31.238 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:31.238 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:31.238 "hdgst": false, 00:27:31.238 "ddgst": false 00:27:31.238 }, 00:27:31.238 "method": "bdev_nvme_attach_controller" 00:27:31.238 },{ 00:27:31.238 "params": { 00:27:31.238 "name": "Nvme8", 00:27:31.238 "trtype": "tcp", 00:27:31.238 "traddr": "10.0.0.2", 00:27:31.238 "adrfam": "ipv4", 00:27:31.238 "trsvcid": "4420", 00:27:31.238 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:31.238 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:31.238 "hdgst": false, 00:27:31.238 "ddgst": false 00:27:31.238 }, 00:27:31.238 "method": "bdev_nvme_attach_controller" 00:27:31.238 },{ 00:27:31.238 "params": { 00:27:31.238 "name": "Nvme9", 00:27:31.238 "trtype": "tcp", 00:27:31.238 "traddr": "10.0.0.2", 00:27:31.238 "adrfam": "ipv4", 00:27:31.238 "trsvcid": "4420", 00:27:31.238 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:31.238 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:31.238 "hdgst": false, 00:27:31.238 "ddgst": false 00:27:31.238 }, 00:27:31.238 "method": "bdev_nvme_attach_controller" 00:27:31.238 },{ 00:27:31.238 "params": { 00:27:31.238 "name": "Nvme10", 00:27:31.238 "trtype": "tcp", 00:27:31.238 "traddr": "10.0.0.2", 00:27:31.239 "adrfam": "ipv4", 00:27:31.239 "trsvcid": "4420", 00:27:31.239 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:31.239 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:31.239 "hdgst": false, 00:27:31.239 "ddgst": false 00:27:31.239 }, 00:27:31.239 "method": "bdev_nvme_attach_controller" 00:27:31.239 }' 00:27:31.499 [2024-06-07 23:23:53.934541] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.499 [2024-06-07 23:23:53.963729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.883 Running I/O for 10 seconds... 
00:27:32.883 23:23:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:32.883 23:23:55 -- common/autotest_common.sh@852 -- # return 0 00:27:32.883 23:23:55 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:32.883 23:23:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:32.883 23:23:55 -- common/autotest_common.sh@10 -- # set +x 00:27:32.883 23:23:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:32.883 23:23:55 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:32.883 23:23:55 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:32.883 23:23:55 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:32.883 23:23:55 -- target/shutdown.sh@57 -- # local ret=1 00:27:32.883 23:23:55 -- target/shutdown.sh@58 -- # local i 00:27:32.883 23:23:55 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:32.883 23:23:55 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:32.883 23:23:55 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:32.883 23:23:55 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:32.883 23:23:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:32.883 23:23:55 -- common/autotest_common.sh@10 -- # set +x 00:27:32.883 23:23:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:32.883 23:23:55 -- target/shutdown.sh@60 -- # read_io_count=42 00:27:32.883 23:23:55 -- target/shutdown.sh@63 -- # '[' 42 -ge 100 ']' 00:27:32.883 23:23:55 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:33.144 23:23:55 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:33.144 23:23:55 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:33.144 23:23:55 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:33.144 23:23:55 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:33.144 23:23:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:33.144 23:23:55 -- common/autotest_common.sh@10 -- # set +x 00:27:33.144 23:23:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:33.144 23:23:55 -- target/shutdown.sh@60 -- # read_io_count=167 00:27:33.145 23:23:55 -- target/shutdown.sh@63 -- # '[' 167 -ge 100 ']' 00:27:33.145 23:23:55 -- target/shutdown.sh@64 -- # ret=0 00:27:33.145 23:23:55 -- target/shutdown.sh@65 -- # break 00:27:33.145 23:23:55 -- target/shutdown.sh@69 -- # return 0 00:27:33.145 23:23:55 -- target/shutdown.sh@109 -- # killprocess 2959690 00:27:33.145 23:23:55 -- common/autotest_common.sh@926 -- # '[' -z 2959690 ']' 00:27:33.145 23:23:55 -- common/autotest_common.sh@930 -- # kill -0 2959690 00:27:33.145 23:23:55 -- common/autotest_common.sh@931 -- # uname 00:27:33.145 23:23:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:33.145 23:23:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2959690 00:27:33.145 23:23:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:33.145 23:23:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:33.145 23:23:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2959690' 00:27:33.145 killing process with pid 2959690 00:27:33.145 23:23:55 -- common/autotest_common.sh@945 -- # kill 2959690 00:27:33.145 23:23:55 -- common/autotest_common.sh@950 -- # wait 2959690 00:27:33.406 Received shutdown signal, test time was about 0.647086 seconds 00:27:33.406 00:27:33.406 Latency(us) 00:27:33.406 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max
00:27:33.406 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:33.406 Verification LBA range: start 0x0 length 0x400
00:27:33.406 Nvme1n1 : 0.60 455.06 28.44 0.00 0.00 136833.51 14964.05 150295.89
00:27:33.406 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:33.406 Verification LBA range: start 0x0 length 0x400
00:27:33.406 Nvme2n1 : 0.65 419.38 26.21 0.00 0.00 138041.85 13052.59 136314.88
00:27:33.406 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:33.406 Verification LBA range: start 0x0 length 0x400
00:27:33.406 Nvme3n1 : 0.60 459.19 28.70 0.00 0.00 132326.41 5843.63 136314.88
00:27:33.406 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:33.406 Verification LBA range: start 0x0 length 0x400
00:27:33.406 Nvme4n1 : 0.63 428.89 26.81 0.00 0.00 130948.92 15510.19 106605.23
00:27:33.406 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:33.406 Verification LBA range: start 0x0 length 0x400
00:27:33.406 Nvme5n1 : 0.60 457.09 28.57 0.00 0.00 129068.77 14636.37 107915.95
00:27:33.406 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:33.406 Verification LBA range: start 0x0 length 0x400
00:27:33.406 Nvme6n1 : 0.60 453.84 28.37 0.00 0.00 128246.36 14636.37 126702.93
00:27:33.406 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:33.406 Verification LBA range: start 0x0 length 0x400
00:27:33.406 Nvme7n1 : 0.60 451.95 28.25 0.00 0.00 126787.87 15400.96 124955.31
00:27:33.406 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:33.406 Verification LBA range: start 0x0 length 0x400
00:27:33.406 Nvme8n1 : 0.62 369.28 23.08 0.00 0.00 142984.17 12943.36 112721.92
00:27:33.406 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:33.406 Verification LBA range: start 0x0 length 0x400
00:27:33.406 Nvme9n1 : 0.62 368.72 23.05 0.00 0.00 141497.64 11468.80 115343.36
00:27:33.406 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:33.406 Verification LBA range: start 0x0 length 0x400
00:27:33.406 Nvme10n1 : 0.59 395.31 24.71 0.00 0.00 137813.22 3044.69 116217.17
00:27:33.406 ===================================================================================================================
00:27:33.406 Total : 4258.72 266.17 0.00 0.00 134135.80 3044.69 150295.89
00:27:33.406 23:23:55 -- target/shutdown.sh@112 -- # sleep 1
00:27:34.349 23:23:56 -- target/shutdown.sh@113 -- # kill -0 2959308
00:27:34.349 23:23:56 -- target/shutdown.sh@115 -- # stoptarget
00:27:34.349 23:23:56 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:27:34.349 23:23:56 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:27:34.349 23:23:56 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:27:34.349 23:23:56 -- target/shutdown.sh@45 -- # nvmftestfini
00:27:34.349 23:23:56 -- nvmf/common.sh@476 -- # nvmfcleanup
00:27:34.349 23:23:56 -- nvmf/common.sh@116 -- # sync
00:27:34.349 23:23:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:27:34.349 23:23:57 -- nvmf/common.sh@119 -- # set +e
00:27:34.349 23:23:57 -- nvmf/common.sh@120 -- # for i in {1..20}
00:27:34.349 23:23:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:27:34.349 rmmod nvme_tcp
00:27:34.349 rmmod nvme_fabrics
00:27:34.610 rmmod nvme_keyring 00:27:34.610 23:23:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:34.610 23:23:57 -- nvmf/common.sh@123 -- # set -e 00:27:34.610 23:23:57 -- nvmf/common.sh@124 -- # return 0 00:27:34.610 23:23:57 -- nvmf/common.sh@477 -- # '[' -n 2959308 ']' 00:27:34.610 23:23:57 -- nvmf/common.sh@478 -- # killprocess 2959308 00:27:34.610 23:23:57 -- common/autotest_common.sh@926 -- # '[' -z 2959308 ']' 00:27:34.610 23:23:57 -- common/autotest_common.sh@930 -- # kill -0 2959308 00:27:34.610 23:23:57 -- common/autotest_common.sh@931 -- # uname 00:27:34.610 23:23:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:34.610 23:23:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2959308 00:27:34.610 23:23:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:34.610 23:23:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:34.610 23:23:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2959308' 00:27:34.610 killing process with pid 2959308 00:27:34.610 23:23:57 -- common/autotest_common.sh@945 -- # kill 2959308 00:27:34.610 23:23:57 -- common/autotest_common.sh@950 -- # wait 2959308 00:27:34.872 23:23:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:34.872 23:23:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:34.872 23:23:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:34.872 23:23:57 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:34.872 23:23:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:34.872 23:23:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:34.872 23:23:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:34.872 23:23:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.785 23:23:59 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:36.785 00:27:36.785 real 0m7.331s 00:27:36.785 user 0m21.249s 00:27:36.785 sys 0m1.157s 00:27:36.785 23:23:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:36.785 23:23:59 -- common/autotest_common.sh@10 -- # set +x 00:27:36.785 ************************************ 00:27:36.785 END TEST nvmf_shutdown_tc2 00:27:36.785 ************************************ 00:27:37.046 23:23:59 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:37.046 23:23:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:37.046 23:23:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:37.046 23:23:59 -- common/autotest_common.sh@10 -- # set +x 00:27:37.046 ************************************ 00:27:37.046 START TEST nvmf_shutdown_tc3 00:27:37.046 ************************************ 00:27:37.046 23:23:59 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:27:37.046 23:23:59 -- target/shutdown.sh@120 -- # starttarget 00:27:37.046 23:23:59 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:37.046 23:23:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:37.046 23:23:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:37.046 23:23:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:37.046 23:23:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:37.046 23:23:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:37.047 23:23:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.047 23:23:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:37.047 23:23:59 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:27:37.047 23:23:59 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:37.047 23:23:59 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:37.047 23:23:59 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:37.047 23:23:59 -- common/autotest_common.sh@10 -- # set +x 00:27:37.047 23:23:59 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:37.047 23:23:59 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:37.047 23:23:59 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:37.047 23:23:59 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:37.047 23:23:59 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:37.047 23:23:59 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:37.047 23:23:59 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:37.047 23:23:59 -- nvmf/common.sh@294 -- # net_devs=() 00:27:37.047 23:23:59 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:37.047 23:23:59 -- nvmf/common.sh@295 -- # e810=() 00:27:37.047 23:23:59 -- nvmf/common.sh@295 -- # local -ga e810 00:27:37.047 23:23:59 -- nvmf/common.sh@296 -- # x722=() 00:27:37.047 23:23:59 -- nvmf/common.sh@296 -- # local -ga x722 00:27:37.047 23:23:59 -- nvmf/common.sh@297 -- # mlx=() 00:27:37.047 23:23:59 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:37.047 23:23:59 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:37.047 23:23:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:37.047 23:23:59 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:37.047 23:23:59 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:37.047 23:23:59 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:37.047 23:23:59 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:37.047 23:23:59 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:37.047 23:23:59 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:37.047 23:23:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:37.047 23:23:59 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:37.047 23:23:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:37.047 23:23:59 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:37.047 23:23:59 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:37.047 23:23:59 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:37.047 23:23:59 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:37.047 23:23:59 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:37.047 23:23:59 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:37.047 23:23:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:37.047 23:23:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:37.047 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:37.047 23:23:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:37.047 23:23:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:37.047 23:23:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.047 23:23:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.047 23:23:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:37.047 23:23:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:37.047 23:23:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:37.047 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:37.047 23:23:59 -- nvmf/common.sh@341 -- # [[ 
ice == unknown ]] 00:27:37.047 23:23:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:37.047 23:23:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.047 23:23:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.047 23:23:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:37.047 23:23:59 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:37.047 23:23:59 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:37.047 23:23:59 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:37.047 23:23:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:37.047 23:23:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.047 23:23:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:37.047 23:23:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.047 23:23:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:37.047 Found net devices under 0000:31:00.0: cvl_0_0 00:27:37.047 23:23:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.047 23:23:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:37.047 23:23:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.047 23:23:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:37.047 23:23:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.047 23:23:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:37.047 Found net devices under 0000:31:00.1: cvl_0_1 00:27:37.047 23:23:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.047 23:23:59 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:37.047 23:23:59 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:37.047 23:23:59 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:37.047 23:23:59 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:37.047 23:23:59 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:37.047 23:23:59 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:37.047 23:23:59 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:37.047 23:23:59 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:37.047 23:23:59 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:37.047 23:23:59 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:37.047 23:23:59 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:37.047 23:23:59 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:37.047 23:23:59 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:37.047 23:23:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:37.047 23:23:59 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:37.047 23:23:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:37.047 23:23:59 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:37.047 23:23:59 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:37.047 23:23:59 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:37.047 23:23:59 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:37.047 23:23:59 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:37.047 23:23:59 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:37.308 23:23:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:37.308 23:23:59 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:37.308 23:23:59 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:37.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:37.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.598 ms 00:27:37.308 00:27:37.308 --- 10.0.0.2 ping statistics --- 00:27:37.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.308 rtt min/avg/max/mdev = 0.598/0.598/0.598/0.000 ms 00:27:37.308 23:23:59 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:37.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:37.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:27:37.308 00:27:37.308 --- 10.0.0.1 ping statistics --- 00:27:37.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.308 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:27:37.308 23:23:59 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:37.308 23:23:59 -- nvmf/common.sh@410 -- # return 0 00:27:37.308 23:23:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:37.308 23:23:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:37.308 23:23:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:37.309 23:23:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:37.309 23:23:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:37.309 23:23:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:37.309 23:23:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:37.309 23:23:59 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:37.309 23:23:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:37.309 23:23:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:37.309 23:23:59 -- common/autotest_common.sh@10 -- # set +x 00:27:37.309 23:23:59 -- nvmf/common.sh@469 -- # nvmfpid=2960938 00:27:37.309 23:23:59 -- nvmf/common.sh@470 -- # waitforlisten 2960938 00:27:37.309 23:23:59 -- common/autotest_common.sh@819 -- # '[' -z 2960938 ']' 00:27:37.309 23:23:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:37.309 23:23:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:37.309 23:23:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:37.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:37.309 23:23:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:37.309 23:23:59 -- common/autotest_common.sh@10 -- # set +x 00:27:37.309 23:23:59 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:37.309 [2024-06-07 23:23:59.881903] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
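Before nvmf_tgt comes up for tc3, nvmf_tcp_init rebuilt the same two-port fixture used for tc2: the first of the two detected ports (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the target side, the second port (cvl_0_1) stays in the root namespace as the initiator side, TCP port 4420 is opened in the firewall, and a ping in each direction proves the path before any NVMe-oF traffic is attempted. Condensed from the trace above (all commands taken directly from it), the fixture amounts to:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1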
00:27:37.309 [2024-06-07 23:23:59.881959] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:37.309 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.309 [2024-06-07 23:23:59.967724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:37.570 [2024-06-07 23:23:59.998058] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:37.570 [2024-06-07 23:23:59.998159] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:37.570 [2024-06-07 23:23:59.998165] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:37.570 [2024-06-07 23:23:59.998170] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:37.570 [2024-06-07 23:23:59.998298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:37.570 [2024-06-07 23:23:59.998492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:37.570 [2024-06-07 23:23:59.998612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.570 [2024-06-07 23:23:59.998614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:38.143 23:24:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:38.143 23:24:00 -- common/autotest_common.sh@852 -- # return 0 00:27:38.143 23:24:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:38.143 23:24:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:38.143 23:24:00 -- common/autotest_common.sh@10 -- # set +x 00:27:38.143 23:24:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:38.143 23:24:00 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:38.143 23:24:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:38.143 23:24:00 -- common/autotest_common.sh@10 -- # set +x 00:27:38.143 [2024-06-07 23:24:00.679300] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:38.143 23:24:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:38.143 23:24:00 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:38.143 23:24:00 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:38.143 23:24:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:38.143 23:24:00 -- common/autotest_common.sh@10 -- # set +x 00:27:38.143 23:24:00 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:38.143 23:24:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:38.143 23:24:00 -- target/shutdown.sh@28 -- # cat 00:27:38.143 23:24:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:38.143 23:24:00 -- target/shutdown.sh@28 -- # cat 00:27:38.143 23:24:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:38.143 23:24:00 -- target/shutdown.sh@28 -- # cat 00:27:38.143 23:24:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:38.143 23:24:00 -- target/shutdown.sh@28 -- # cat 00:27:38.143 23:24:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:38.143 23:24:00 -- target/shutdown.sh@28 -- # cat 00:27:38.143 23:24:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:38.143 23:24:00 -- 
target/shutdown.sh@28 -- # cat 00:27:38.143 23:24:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:38.143 23:24:00 -- target/shutdown.sh@28 -- # cat 00:27:38.143 23:24:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:38.143 23:24:00 -- target/shutdown.sh@28 -- # cat 00:27:38.143 23:24:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:38.143 23:24:00 -- target/shutdown.sh@28 -- # cat 00:27:38.143 23:24:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:38.143 23:24:00 -- target/shutdown.sh@28 -- # cat 00:27:38.143 23:24:00 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:38.143 23:24:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:38.143 23:24:00 -- common/autotest_common.sh@10 -- # set +x 00:27:38.143 Malloc1 00:27:38.143 [2024-06-07 23:24:00.777888] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:38.143 Malloc2 00:27:38.405 Malloc3 00:27:38.405 Malloc4 00:27:38.405 Malloc5 00:27:38.405 Malloc6 00:27:38.405 Malloc7 00:27:38.405 Malloc8 00:27:38.405 Malloc9 00:27:38.715 Malloc10 00:27:38.715 23:24:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:38.715 23:24:01 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:38.715 23:24:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:38.715 23:24:01 -- common/autotest_common.sh@10 -- # set +x 00:27:38.715 23:24:01 -- target/shutdown.sh@124 -- # perfpid=2961303 00:27:38.715 23:24:01 -- target/shutdown.sh@125 -- # waitforlisten 2961303 /var/tmp/bdevperf.sock 00:27:38.715 23:24:01 -- common/autotest_common.sh@819 -- # '[' -z 2961303 ']' 00:27:38.715 23:24:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:38.715 23:24:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:38.715 23:24:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:38.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
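As in tc2, bdevperf is started in the background and the harness then blocks in waitforlisten on /var/tmp/bdevperf.sock, echoing the "Waiting for process..." line seen above, so no RPCs are issued until the application's RPC server is actually accepting connections. The implementation of waitforlisten is not visible in this trace; an illustrative stand-in for the same gate (not the harness's actual code, with rpc_get_methods chosen here only as a cheap no-op query) would be:

  # poll the bdevperf RPC socket until it answers, or give up after ~10 s
  for _ in $(seq 1 100); do
      scripts/rpc.py -s /var/tmp/bdevperf.sock -t 1 rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done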
00:27:38.715 23:24:01 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:38.715 23:24:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:38.715 23:24:01 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:38.715 23:24:01 -- common/autotest_common.sh@10 -- # set +x 00:27:38.715 23:24:01 -- nvmf/common.sh@520 -- # config=() 00:27:38.715 23:24:01 -- nvmf/common.sh@520 -- # local subsystem config 00:27:38.715 23:24:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:38.715 23:24:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:38.715 { 00:27:38.715 "params": { 00:27:38.715 "name": "Nvme$subsystem", 00:27:38.715 "trtype": "$TEST_TRANSPORT", 00:27:38.715 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.715 "adrfam": "ipv4", 00:27:38.715 "trsvcid": "$NVMF_PORT", 00:27:38.715 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.715 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.715 "hdgst": ${hdgst:-false}, 00:27:38.715 "ddgst": ${ddgst:-false} 00:27:38.715 }, 00:27:38.715 "method": "bdev_nvme_attach_controller" 00:27:38.715 } 00:27:38.715 EOF 00:27:38.715 )") 00:27:38.715 23:24:01 -- nvmf/common.sh@542 -- # cat 00:27:38.715 23:24:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:38.715 23:24:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:38.715 { 00:27:38.715 "params": { 00:27:38.715 "name": "Nvme$subsystem", 00:27:38.715 "trtype": "$TEST_TRANSPORT", 00:27:38.715 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.715 "adrfam": "ipv4", 00:27:38.715 "trsvcid": "$NVMF_PORT", 00:27:38.715 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.715 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.715 "hdgst": ${hdgst:-false}, 00:27:38.715 "ddgst": ${ddgst:-false} 00:27:38.715 }, 00:27:38.715 "method": "bdev_nvme_attach_controller" 00:27:38.715 } 00:27:38.715 EOF 00:27:38.715 )") 00:27:38.715 23:24:01 -- nvmf/common.sh@542 -- # cat 00:27:38.715 23:24:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:38.715 23:24:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:38.715 { 00:27:38.715 "params": { 00:27:38.715 "name": "Nvme$subsystem", 00:27:38.715 "trtype": "$TEST_TRANSPORT", 00:27:38.715 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.715 "adrfam": "ipv4", 00:27:38.715 "trsvcid": "$NVMF_PORT", 00:27:38.715 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.715 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.715 "hdgst": ${hdgst:-false}, 00:27:38.715 "ddgst": ${ddgst:-false} 00:27:38.715 }, 00:27:38.715 "method": "bdev_nvme_attach_controller" 00:27:38.715 } 00:27:38.715 EOF 00:27:38.715 )") 00:27:38.715 23:24:01 -- nvmf/common.sh@542 -- # cat 00:27:38.715 23:24:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:38.715 23:24:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:38.715 { 00:27:38.715 "params": { 00:27:38.715 "name": "Nvme$subsystem", 00:27:38.715 "trtype": "$TEST_TRANSPORT", 00:27:38.715 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.715 "adrfam": "ipv4", 00:27:38.715 "trsvcid": "$NVMF_PORT", 00:27:38.715 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.715 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.716 "hdgst": ${hdgst:-false}, 00:27:38.716 "ddgst": ${ddgst:-false} 00:27:38.716 }, 00:27:38.716 "method": "bdev_nvme_attach_controller" 00:27:38.716 } 00:27:38.716 EOF 00:27:38.716 )") 
00:27:38.716 23:24:01 -- nvmf/common.sh@542 -- # cat 00:27:38.716 23:24:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:38.716 23:24:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:38.716 { 00:27:38.716 "params": { 00:27:38.716 "name": "Nvme$subsystem", 00:27:38.716 "trtype": "$TEST_TRANSPORT", 00:27:38.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.716 "adrfam": "ipv4", 00:27:38.716 "trsvcid": "$NVMF_PORT", 00:27:38.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.716 "hdgst": ${hdgst:-false}, 00:27:38.716 "ddgst": ${ddgst:-false} 00:27:38.716 }, 00:27:38.716 "method": "bdev_nvme_attach_controller" 00:27:38.716 } 00:27:38.716 EOF 00:27:38.716 )") 00:27:38.716 23:24:01 -- nvmf/common.sh@542 -- # cat 00:27:38.716 23:24:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:38.716 23:24:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:38.716 { 00:27:38.716 "params": { 00:27:38.716 "name": "Nvme$subsystem", 00:27:38.716 "trtype": "$TEST_TRANSPORT", 00:27:38.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.716 "adrfam": "ipv4", 00:27:38.716 "trsvcid": "$NVMF_PORT", 00:27:38.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.716 "hdgst": ${hdgst:-false}, 00:27:38.716 "ddgst": ${ddgst:-false} 00:27:38.716 }, 00:27:38.716 "method": "bdev_nvme_attach_controller" 00:27:38.716 } 00:27:38.716 EOF 00:27:38.716 )") 00:27:38.716 23:24:01 -- nvmf/common.sh@542 -- # cat 00:27:38.716 [2024-06-07 23:24:01.219697] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:38.716 [2024-06-07 23:24:01.219752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2961303 ] 00:27:38.716 23:24:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:38.716 23:24:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:38.716 { 00:27:38.716 "params": { 00:27:38.716 "name": "Nvme$subsystem", 00:27:38.716 "trtype": "$TEST_TRANSPORT", 00:27:38.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.716 "adrfam": "ipv4", 00:27:38.716 "trsvcid": "$NVMF_PORT", 00:27:38.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.716 "hdgst": ${hdgst:-false}, 00:27:38.716 "ddgst": ${ddgst:-false} 00:27:38.716 }, 00:27:38.716 "method": "bdev_nvme_attach_controller" 00:27:38.716 } 00:27:38.716 EOF 00:27:38.716 )") 00:27:38.716 23:24:01 -- nvmf/common.sh@542 -- # cat 00:27:38.716 23:24:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:38.716 23:24:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:38.716 { 00:27:38.716 "params": { 00:27:38.716 "name": "Nvme$subsystem", 00:27:38.716 "trtype": "$TEST_TRANSPORT", 00:27:38.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.716 "adrfam": "ipv4", 00:27:38.716 "trsvcid": "$NVMF_PORT", 00:27:38.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.716 "hdgst": ${hdgst:-false}, 00:27:38.716 "ddgst": ${ddgst:-false} 00:27:38.716 }, 00:27:38.716 "method": "bdev_nvme_attach_controller" 00:27:38.716 } 00:27:38.716 EOF 00:27:38.716 )") 00:27:38.716 23:24:01 -- nvmf/common.sh@542 -- # cat 00:27:38.716 23:24:01 -- nvmf/common.sh@522 -- # for subsystem in 
"${@:-1}" 00:27:38.716 23:24:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:38.716 { 00:27:38.716 "params": { 00:27:38.716 "name": "Nvme$subsystem", 00:27:38.716 "trtype": "$TEST_TRANSPORT", 00:27:38.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.716 "adrfam": "ipv4", 00:27:38.716 "trsvcid": "$NVMF_PORT", 00:27:38.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.716 "hdgst": ${hdgst:-false}, 00:27:38.716 "ddgst": ${ddgst:-false} 00:27:38.716 }, 00:27:38.716 "method": "bdev_nvme_attach_controller" 00:27:38.716 } 00:27:38.716 EOF 00:27:38.716 )") 00:27:38.716 23:24:01 -- nvmf/common.sh@542 -- # cat 00:27:38.716 23:24:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:38.716 23:24:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:38.716 { 00:27:38.716 "params": { 00:27:38.716 "name": "Nvme$subsystem", 00:27:38.716 "trtype": "$TEST_TRANSPORT", 00:27:38.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.716 "adrfam": "ipv4", 00:27:38.716 "trsvcid": "$NVMF_PORT", 00:27:38.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.716 "hdgst": ${hdgst:-false}, 00:27:38.716 "ddgst": ${ddgst:-false} 00:27:38.716 }, 00:27:38.716 "method": "bdev_nvme_attach_controller" 00:27:38.716 } 00:27:38.716 EOF 00:27:38.716 )") 00:27:38.716 EAL: No free 2048 kB hugepages reported on node 1 00:27:38.716 23:24:01 -- nvmf/common.sh@542 -- # cat 00:27:38.716 23:24:01 -- nvmf/common.sh@544 -- # jq . 00:27:38.716 23:24:01 -- nvmf/common.sh@545 -- # IFS=, 00:27:38.716 23:24:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:38.716 "params": { 00:27:38.716 "name": "Nvme1", 00:27:38.716 "trtype": "tcp", 00:27:38.716 "traddr": "10.0.0.2", 00:27:38.716 "adrfam": "ipv4", 00:27:38.716 "trsvcid": "4420", 00:27:38.716 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:38.716 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:38.716 "hdgst": false, 00:27:38.716 "ddgst": false 00:27:38.716 }, 00:27:38.716 "method": "bdev_nvme_attach_controller" 00:27:38.716 },{ 00:27:38.716 "params": { 00:27:38.716 "name": "Nvme2", 00:27:38.716 "trtype": "tcp", 00:27:38.716 "traddr": "10.0.0.2", 00:27:38.716 "adrfam": "ipv4", 00:27:38.716 "trsvcid": "4420", 00:27:38.716 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:38.716 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:38.716 "hdgst": false, 00:27:38.716 "ddgst": false 00:27:38.716 }, 00:27:38.716 "method": "bdev_nvme_attach_controller" 00:27:38.716 },{ 00:27:38.716 "params": { 00:27:38.716 "name": "Nvme3", 00:27:38.716 "trtype": "tcp", 00:27:38.716 "traddr": "10.0.0.2", 00:27:38.716 "adrfam": "ipv4", 00:27:38.716 "trsvcid": "4420", 00:27:38.716 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:38.716 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:38.716 "hdgst": false, 00:27:38.716 "ddgst": false 00:27:38.716 }, 00:27:38.716 "method": "bdev_nvme_attach_controller" 00:27:38.716 },{ 00:27:38.716 "params": { 00:27:38.716 "name": "Nvme4", 00:27:38.716 "trtype": "tcp", 00:27:38.716 "traddr": "10.0.0.2", 00:27:38.716 "adrfam": "ipv4", 00:27:38.716 "trsvcid": "4420", 00:27:38.716 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:38.717 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:38.717 "hdgst": false, 00:27:38.717 "ddgst": false 00:27:38.717 }, 00:27:38.717 "method": "bdev_nvme_attach_controller" 00:27:38.717 },{ 00:27:38.717 "params": { 00:27:38.717 "name": "Nvme5", 00:27:38.717 "trtype": "tcp", 00:27:38.717 "traddr": "10.0.0.2", 00:27:38.717 
"adrfam": "ipv4", 00:27:38.717 "trsvcid": "4420", 00:27:38.717 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:38.717 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:38.717 "hdgst": false, 00:27:38.717 "ddgst": false 00:27:38.717 }, 00:27:38.717 "method": "bdev_nvme_attach_controller" 00:27:38.717 },{ 00:27:38.717 "params": { 00:27:38.717 "name": "Nvme6", 00:27:38.717 "trtype": "tcp", 00:27:38.717 "traddr": "10.0.0.2", 00:27:38.717 "adrfam": "ipv4", 00:27:38.717 "trsvcid": "4420", 00:27:38.717 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:38.717 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:38.717 "hdgst": false, 00:27:38.717 "ddgst": false 00:27:38.717 }, 00:27:38.717 "method": "bdev_nvme_attach_controller" 00:27:38.717 },{ 00:27:38.717 "params": { 00:27:38.717 "name": "Nvme7", 00:27:38.717 "trtype": "tcp", 00:27:38.717 "traddr": "10.0.0.2", 00:27:38.717 "adrfam": "ipv4", 00:27:38.717 "trsvcid": "4420", 00:27:38.717 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:38.717 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:38.717 "hdgst": false, 00:27:38.717 "ddgst": false 00:27:38.717 }, 00:27:38.717 "method": "bdev_nvme_attach_controller" 00:27:38.717 },{ 00:27:38.717 "params": { 00:27:38.717 "name": "Nvme8", 00:27:38.717 "trtype": "tcp", 00:27:38.717 "traddr": "10.0.0.2", 00:27:38.717 "adrfam": "ipv4", 00:27:38.717 "trsvcid": "4420", 00:27:38.717 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:38.717 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:38.717 "hdgst": false, 00:27:38.717 "ddgst": false 00:27:38.717 }, 00:27:38.717 "method": "bdev_nvme_attach_controller" 00:27:38.717 },{ 00:27:38.717 "params": { 00:27:38.717 "name": "Nvme9", 00:27:38.717 "trtype": "tcp", 00:27:38.717 "traddr": "10.0.0.2", 00:27:38.717 "adrfam": "ipv4", 00:27:38.717 "trsvcid": "4420", 00:27:38.717 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:38.717 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:38.717 "hdgst": false, 00:27:38.717 "ddgst": false 00:27:38.717 }, 00:27:38.717 "method": "bdev_nvme_attach_controller" 00:27:38.717 },{ 00:27:38.717 "params": { 00:27:38.717 "name": "Nvme10", 00:27:38.717 "trtype": "tcp", 00:27:38.717 "traddr": "10.0.0.2", 00:27:38.717 "adrfam": "ipv4", 00:27:38.717 "trsvcid": "4420", 00:27:38.717 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:38.717 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:38.717 "hdgst": false, 00:27:38.717 "ddgst": false 00:27:38.717 }, 00:27:38.717 "method": "bdev_nvme_attach_controller" 00:27:38.717 }' 00:27:38.717 [2024-06-07 23:24:01.280949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.717 [2024-06-07 23:24:01.310155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:40.706 Running I/O for 10 seconds... 
00:27:40.706 23:24:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:40.706 23:24:03 -- common/autotest_common.sh@852 -- # return 0 00:27:40.706 23:24:03 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:40.706 23:24:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:40.706 23:24:03 -- common/autotest_common.sh@10 -- # set +x 00:27:40.706 23:24:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:40.706 23:24:03 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:40.706 23:24:03 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:40.706 23:24:03 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:40.706 23:24:03 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:40.706 23:24:03 -- target/shutdown.sh@57 -- # local ret=1 00:27:40.706 23:24:03 -- target/shutdown.sh@58 -- # local i 00:27:40.706 23:24:03 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:40.706 23:24:03 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:40.706 23:24:03 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:40.706 23:24:03 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:40.706 23:24:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:40.706 23:24:03 -- common/autotest_common.sh@10 -- # set +x 00:27:40.706 23:24:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:40.706 23:24:03 -- target/shutdown.sh@60 -- # read_io_count=129 00:27:40.706 23:24:03 -- target/shutdown.sh@63 -- # '[' 129 -ge 100 ']' 00:27:40.706 23:24:03 -- target/shutdown.sh@64 -- # ret=0 00:27:40.706 23:24:03 -- target/shutdown.sh@65 -- # break 00:27:40.706 23:24:03 -- target/shutdown.sh@69 -- # return 0 00:27:40.706 23:24:03 -- target/shutdown.sh@134 -- # killprocess 2960938 00:27:40.706 23:24:03 -- common/autotest_common.sh@926 -- # '[' -z 2960938 ']' 00:27:40.706 23:24:03 -- common/autotest_common.sh@930 -- # kill -0 2960938 00:27:40.706 23:24:03 -- common/autotest_common.sh@931 -- # uname 00:27:40.706 23:24:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:40.706 23:24:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2960938 00:27:40.982 23:24:03 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:40.982 23:24:03 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:40.982 23:24:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2960938' 00:27:40.982 killing process with pid 2960938 00:27:40.982 23:24:03 -- common/autotest_common.sh@945 -- # kill 2960938 00:27:40.982 [2024-06-07 23:24:03.429457] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aefc10 is same with the state(5) to be set 00:27:40.982 23:24:03 -- common/autotest_common.sh@950 -- # wait 2960938 00:27:40.982 [2024-06-07 23:24:03.429503] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aefc10 is same with the state(5) to be set 00:27:40.982 [2024-06-07 23:24:03.429509] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aefc10 is same with the state(5) to be set 00:27:40.982 [2024-06-07 23:24:03.429515] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aefc10 is same with the state(5) to be set 00:27:40.982 [2024-06-07 23:24:03.429519] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
00:27:40.982 [2024-06-07 23:24:03.429503 - 23:24:03.434918] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair is same with the state(5) to be set (same message logged repeatedly for tqpair=0x1aefc10, 0x1af25c0, 0x1af00c0, 0x1af0570, 0x1af0a00, 0x1af0eb0 and 0x1af1340; duplicate timestamped lines condensed)
00:27:40.987 [2024-06-07 23:24:03.434923] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the
state(5) to be set 00:27:40.987 [2024-06-07 23:24:03.434927] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.987 [2024-06-07 23:24:03.434931] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.987 [2024-06-07 23:24:03.434936] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.987 [2024-06-07 23:24:03.434940] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.987 [2024-06-07 23:24:03.434945] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.987 [2024-06-07 23:24:03.434950] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.987 [2024-06-07 23:24:03.434955] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.987 [2024-06-07 23:24:03.434959] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.987 [2024-06-07 23:24:03.443319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.987 [2024-06-07 23:24:03.443355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.987 [2024-06-07 23:24:03.443366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.987 [2024-06-07 23:24:03.443374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.987 [2024-06-07 23:24:03.443382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.987 [2024-06-07 23:24:03.443390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.987 [2024-06-07 23:24:03.443398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.987 [2024-06-07 23:24:03.443406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.987 [2024-06-07 23:24:03.443414] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd719b0 is same with the state(5) to be set 00:27:40.987 [2024-06-07 23:24:03.443449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.987 [2024-06-07 23:24:03.443462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.987 [2024-06-07 23:24:03.443474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.987 [2024-06-07 23:24:03.443486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.987 [2024-06-07 23:24:03.443494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.987 [2024-06-07 23:24:03.443501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.987 [2024-06-07 23:24:03.443508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.987 [2024-06-07 23:24:03.443516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.987 [2024-06-07 23:24:03.443523] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf04510 is same with the state(5) to be set 00:27:40.987 [2024-06-07 23:24:03.443548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.987 [2024-06-07 23:24:03.443556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.987 [2024-06-07 23:24:03.443564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.987 [2024-06-07 23:24:03.443571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.987 [2024-06-07 23:24:03.443579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.987 [2024-06-07 23:24:03.443590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.987 [2024-06-07 23:24:03.443598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.987 [2024-06-07 23:24:03.443605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.987 [2024-06-07 23:24:03.443611] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2ade0 is same with the state(5) to be set 00:27:40.987 [2024-06-07 23:24:03.443642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.987 [2024-06-07 23:24:03.443650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.987 [2024-06-07 23:24:03.443658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.987 [2024-06-07 23:24:03.443665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.987 [2024-06-07 23:24:03.443673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.987 [2024-06-07 23:24:03.443680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.987 [2024-06-07 23:24:03.443687] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.987 [2024-06-07 23:24:03.443694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.987 [2024-06-07 23:24:03.443701] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd90630 is same with the state(5) to be set 00:27:40.987 [2024-06-07 23:24:03.443737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.987 [2024-06-07 23:24:03.443745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.987 [2024-06-07 23:24:03.443753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.987 [2024-06-07 23:24:03.443760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.988 [2024-06-07 23:24:03.443768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.988 [2024-06-07 23:24:03.443775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.988 [2024-06-07 23:24:03.443783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.988 [2024-06-07 23:24:03.443790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.988 [2024-06-07 23:24:03.443797] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87430 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.443817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.988 [2024-06-07 23:24:03.443825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.988 [2024-06-07 23:24:03.443833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.988 [2024-06-07 23:24:03.443843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.988 [2024-06-07 23:24:03.443851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.988 [2024-06-07 23:24:03.443858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.988 [2024-06-07 23:24:03.443866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.988 [2024-06-07 23:24:03.443873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.988 [2024-06-07 23:24:03.443867] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the 
state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.443880] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd642c0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.443889] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.443896] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.443903] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.443909] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.443914] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.443921] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.443927] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.443933] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.443939] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.443945] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.443951] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.443958] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.443963] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.443968] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.443973] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.443978] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.443983] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.443988] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.443993] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.443998] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.444006] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.444012] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.444017] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.444022] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.444027] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.444032] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.444037] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.444042] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.444048] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.444053] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.444058] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1340 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446294] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446310] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446314] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446319] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446324] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446328] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446333] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446337] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446341] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446346] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446350] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 
23:24:03.446355] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446359] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446363] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446368] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446372] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446379] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446384] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446389] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446393] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446397] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446402] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446406] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446411] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446415] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446419] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446424] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446428] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446432] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446436] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446441] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446445] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446449] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same 
with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446454] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446458] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446462] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446467] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.988 [2024-06-07 23:24:03.446471] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.446475] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.446480] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.446484] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.446489] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.446493] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.446498] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.446506] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.446511] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.446515] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.446519] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.446524] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.446528] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.446532] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.446536] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.446541] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.446545] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.446550] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.446554] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.446558] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.446563] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.446567] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.446571] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.446575] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.446580] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.446585] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af17f0 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447149] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447165] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447170] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447174] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447179] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447183] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447188] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447192] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447199] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447203] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447208] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447212] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447217] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the 
state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447221] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447225] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447230] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447234] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447239] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447248] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447253] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447257] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447262] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447266] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447270] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447274] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447279] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447283] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447288] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447292] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447297] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447301] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447305] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447309] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447314] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447318] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447324] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447328] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447332] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447336] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447341] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447345] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447350] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447354] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447358] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447363] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447367] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447371] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447376] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447380] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447385] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447389] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447394] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447398] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447403] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447407] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447412] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 
23:24:03.447416] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447420] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447425] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.989 [2024-06-07 23:24:03.447429] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.447433] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.447438] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.447443] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1c80 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.447889] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.447903] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.447908] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.447914] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.447919] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.447923] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.447928] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.447932] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.447937] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.447942] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.447947] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.447951] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.447956] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.447960] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.447964] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same 
with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.447969] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.447973] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.447978] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.447982] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.447987] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.447991] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.447995] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448000] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448004] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448009] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448013] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448020] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448025] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448029] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448034] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448038] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448043] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448047] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448051] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448056] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448060] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448064] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448069] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448073] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448078] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448082] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448087] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448091] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448096] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448100] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448104] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448109] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448114] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448118] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448122] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448127] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448131] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448135] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448141] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448146] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448150] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448154] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448159] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the 
state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448163] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448167] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448171] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448176] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.448180] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af2130 is same with the state(5) to be set 00:27:40.990 [2024-06-07 23:24:03.451180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.990 [2024-06-07 23:24:03.451203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.990 [2024-06-07 23:24:03.451220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.990 [2024-06-07 23:24:03.451228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.990 [2024-06-07 23:24:03.451238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.990 [2024-06-07 23:24:03.451262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.990 [2024-06-07 23:24:03.451272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.990 [2024-06-07 23:24:03.451279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.990 [2024-06-07 23:24:03.451288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.990 [2024-06-07 23:24:03.451295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.990 [2024-06-07 23:24:03.451305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.990 [2024-06-07 23:24:03.451312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.990 [2024-06-07 23:24:03.451321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.990 [2024-06-07 23:24:03.451328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.990 [2024-06-07 23:24:03.451338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.990 [2024-06-07 23:24:03.451344] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.990 [2024-06-07 23:24:03.451358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.990 [2024-06-07 23:24:03.451365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.991 [2024-06-07 23:24:03.451374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.991 [2024-06-07 23:24:03.451381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.991 [2024-06-07 23:24:03.451390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.991 [2024-06-07 23:24:03.451397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.991 [2024-06-07 23:24:03.451406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.991 [2024-06-07 23:24:03.451413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.991 [2024-06-07 23:24:03.451423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.991 [2024-06-07 23:24:03.451430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.991 [2024-06-07 23:24:03.451439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.991 [2024-06-07 23:24:03.451446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.991 [2024-06-07 23:24:03.451455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.991 [2024-06-07 23:24:03.451462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.991 [2024-06-07 23:24:03.451471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.991 [2024-06-07 23:24:03.451478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.991 [2024-06-07 23:24:03.451488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.991 [2024-06-07 23:24:03.451494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.991 [2024-06-07 23:24:03.451504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.991 [2024-06-07 23:24:03.451511] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.991 [2024-06-07 23:24:03.451520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.991 [2024-06-07 23:24:03.451527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.991 [2024-06-07 23:24:03.451536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.991 [2024-06-07 23:24:03.451543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.991 [2024-06-07 23:24:03.451553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.991 [2024-06-07 23:24:03.451562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.991 [2024-06-07 23:24:03.451571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.991 [2024-06-07 23:24:03.451578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.991 [2024-06-07 23:24:03.451587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.991 [2024-06-07 23:24:03.451594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.991 [2024-06-07 23:24:03.451603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.991 [2024-06-07 23:24:03.451611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.991 [2024-06-07 23:24:03.451620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.991 [2024-06-07 23:24:03.451627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.991 [2024-06-07 23:24:03.451636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.991 [2024-06-07 23:24:03.451643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.991 [2024-06-07 23:24:03.451652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.991 [2024-06-07 23:24:03.451659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.991 [2024-06-07 23:24:03.451669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.991 [2024-06-07 23:24:03.451676] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.991 [2024-06-07 23:24:03.451685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.991 [2024-06-07 23:24:03.451692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.991 [2024-06-07 23:24:03.451701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.991 [2024-06-07 23:24:03.451709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.991 [2024-06-07 23:24:03.451718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.991 [2024-06-07 23:24:03.451725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.991 [2024-06-07 23:24:03.451734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.991 [2024-06-07 23:24:03.451741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.991 [2024-06-07 23:24:03.451750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.991 [2024-06-07 23:24:03.451758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.991 [2024-06-07 23:24:03.451769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.991 [2024-06-07 23:24:03.451776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.991 [2024-06-07 23:24:03.451785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.991 [2024-06-07 23:24:03.451792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.991 [2024-06-07 23:24:03.451801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.991 [2024-06-07 23:24:03.451808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.991 [2024-06-07 23:24:03.451818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.991 [2024-06-07 23:24:03.451825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.991 [2024-06-07 23:24:03.451834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.991 [2024-06-07 23:24:03.451841] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.991 [2024-06-07 23:24:03.451850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.991 [2024-06-07 23:24:03.451858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.991 [2024-06-07 23:24:03.451867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.991 [2024-06-07 23:24:03.451874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.991 [2024-06-07 23:24:03.451883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.991 [2024-06-07 23:24:03.451890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.991 [2024-06-07 23:24:03.451900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.991 [2024-06-07 23:24:03.451906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.991 [2024-06-07 23:24:03.451916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.991 [2024-06-07 23:24:03.451922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.451932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.451939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.451948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.451955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.451964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.451973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.451982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.451989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.451998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.452005] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.452014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.452021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.452030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.452037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.452047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.452054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.452063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.452070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.452079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.452086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.452095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.452102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.452112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.452119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.452128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.452135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.452144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.452150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.452160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.452166] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.452177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.452184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.452193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.452200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.452209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.452216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.452225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.452233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.452248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.452255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.452264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.452272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.452281] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1770 is same with the state(5) to be set 00:27:40.992 [2024-06-07 23:24:03.452326] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xeb1770 was disconnected and freed. reset controller. 
00:27:40.992 [2024-06-07 23:24:03.452909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.452929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.452942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.452950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.452959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.452966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.452976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.452983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.452993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.453000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.453009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.453019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.453029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.453037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.453047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.453054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.453063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.453070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.453079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.453086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 
23:24:03.453096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.453102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.453112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.453119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.453128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.453135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.453145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.453151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.453161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.453168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.453177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.453184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.453193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.453200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.992 [2024-06-07 23:24:03.453209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.992 [2024-06-07 23:24:03.453217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453267] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453430] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453602] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453766] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.993 [2024-06-07 23:24:03.453881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.993 [2024-06-07 23:24:03.453888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.994 [2024-06-07 23:24:03.453897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.994 [2024-06-07 23:24:03.453904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.994 [2024-06-07 23:24:03.453913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.994 [2024-06-07 23:24:03.453920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.994 [2024-06-07 23:24:03.453929] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.994 [2024-06-07 23:24:03.453936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.994 [2024-06-07 23:24:03.453945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.994 [2024-06-07 23:24:03.453953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.994 [2024-06-07 23:24:03.453962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.994 [2024-06-07 23:24:03.453969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.994 [2024-06-07 23:24:03.453978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.994 [2024-06-07 23:24:03.453985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.994 [2024-06-07 23:24:03.453993] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32380 is same with the state(5) to be set 00:27:40.994 [2024-06-07 23:24:03.454031] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe32380 was disconnected and freed. reset controller. 00:27:40.994 [2024-06-07 23:24:03.455655] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:27:40.994 [2024-06-07 23:24:03.455708] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xef8350 (9): Bad file descriptor 00:27:40.994 [2024-06-07 23:24:03.455722] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd719b0 (9): Bad file descriptor 00:27:40.994 [2024-06-07 23:24:03.455748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.994 [2024-06-07 23:24:03.455757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.994 [2024-06-07 23:24:03.455766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.994 [2024-06-07 23:24:03.455773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.994 [2024-06-07 23:24:03.455788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.994 [2024-06-07 23:24:03.455795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.994 [2024-06-07 23:24:03.455803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.994 [2024-06-07 23:24:03.455810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.994 
[2024-06-07 23:24:03.455817] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf02650 is same with the state(5) to be set 00:27:40.994 [2024-06-07 23:24:03.455833] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf04510 (9): Bad file descriptor 00:27:40.994 [2024-06-07 23:24:03.455851] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2ade0 (9): Bad file descriptor 00:27:40.994 [2024-06-07 23:24:03.455868] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd90630 (9): Bad file descriptor 00:27:40.994 [2024-06-07 23:24:03.455887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.994 [2024-06-07 23:24:03.455895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.994 [2024-06-07 23:24:03.455904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.994 [2024-06-07 23:24:03.455911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.994 [2024-06-07 23:24:03.455919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.994 [2024-06-07 23:24:03.455926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.994 [2024-06-07 23:24:03.455934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.994 [2024-06-07 23:24:03.455941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.994 [2024-06-07 23:24:03.455948] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88180 is same with the state(5) to be set 00:27:40.994 [2024-06-07 23:24:03.455970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.994 [2024-06-07 23:24:03.455978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.994 [2024-06-07 23:24:03.455986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.994 [2024-06-07 23:24:03.455993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.994 [2024-06-07 23:24:03.456002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.994 [2024-06-07 23:24:03.456008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.994 [2024-06-07 23:24:03.456017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:40.994 [2024-06-07 23:24:03.456023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:40.994 [2024-06-07 23:24:03.456030] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf102b0 is same with the state(5) to be set 00:27:40.994 [2024-06-07 23:24:03.456049] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd87430 (9): Bad file descriptor 00:27:40.994 [2024-06-07 23:24:03.456064] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd642c0 (9): Bad file descriptor 00:27:40.994 [2024-06-07 23:24:03.457897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.994 [2024-06-07 23:24:03.457916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.994 [2024-06-07 23:24:03.457928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.994 [2024-06-07 23:24:03.457935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.994 [2024-06-07 23:24:03.457945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.994 [2024-06-07 23:24:03.457952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.994 [2024-06-07 23:24:03.457961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.994 [2024-06-07 23:24:03.457968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.994 [2024-06-07 23:24:03.457977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.994 [2024-06-07 23:24:03.457984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.994 [2024-06-07 23:24:03.457993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.994 [2024-06-07 23:24:03.458000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.994 [2024-06-07 23:24:03.458010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.994 [2024-06-07 23:24:03.458017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.994 [2024-06-07 23:24:03.458026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.994 [2024-06-07 23:24:03.458033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.994 [2024-06-07 23:24:03.458042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.994 [2024-06-07 
23:24:03.458050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.994 [2024-06-07 23:24:03.458059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.994 [2024-06-07 23:24:03.458066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.994 [2024-06-07 23:24:03.458076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.994 [2024-06-07 23:24:03.458083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.994 [2024-06-07 23:24:03.458092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.994 [2024-06-07 23:24:03.458103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.994 [2024-06-07 23:24:03.458112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.994 [2024-06-07 23:24:03.458119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.994 [2024-06-07 23:24:03.458128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.994 [2024-06-07 23:24:03.458135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.994 [2024-06-07 23:24:03.458145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.994 [2024-06-07 23:24:03.458152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.994 [2024-06-07 23:24:03.458161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.994 [2024-06-07 23:24:03.458168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458216] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458390] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458554] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458714] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.995 [2024-06-07 23:24:03.458822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.995 [2024-06-07 23:24:03.458829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.996 [2024-06-07 23:24:03.458838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.996 [2024-06-07 23:24:03.458845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.996 [2024-06-07 23:24:03.458855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.996 [2024-06-07 23:24:03.458862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.996 [2024-06-07 23:24:03.458871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.996 [2024-06-07 23:24:03.458878] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.996 [2024-06-07 23:24:03.458887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.996 [2024-06-07 23:24:03.458894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.996 [2024-06-07 23:24:03.458903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.996 [2024-06-07 23:24:03.458910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.996 [2024-06-07 23:24:03.458919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.996 [2024-06-07 23:24:03.458927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.996 [2024-06-07 23:24:03.458936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.996 [2024-06-07 23:24:03.458944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.996 [2024-06-07 23:24:03.458953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.996 [2024-06-07 23:24:03.458960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.996 [2024-06-07 23:24:03.459015] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xeb2d50 was disconnected and freed. reset controller. 
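The "(00/08)" printed in each completion line above is the NVMe status shown as (SCT/SC) in hex: status code type 0x0 (generic command status) with status code 0x08, "Command Aborted due to SQ Deletion", which is what in-flight READ/WRITE commands report while the test resets the controller and its submission queues are deleted. A minimal, illustrative sketch of decoding that field from one of these lines follows; the status table covers only the codes visible in this log and is not the full NVMe status map.

import re

# NVMe generic command status (SCT 0x0) codes seen in this log only.
GENERIC_STATUS = {
    0x00: "SUCCESSFUL COMPLETION",
    0x08: "COMMAND ABORTED DUE TO SQ DELETION",
}

def decode_completion(line):
    # Pull the "(sct/sc)" hex pair out of an spdk_nvme_print_completion line.
    m = re.search(r"\(([0-9a-fA-F]{2})/([0-9a-fA-F]{2})\)", line)
    if not m:
        return None
    sct, sc = int(m.group(1), 16), int(m.group(2), 16)
    name = GENERIC_STATUS.get(sc, "unknown") if sct == 0x0 else "non-generic SCT"
    return sct, sc, name

print(decode_completion(
    "ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0"))
# -> (0, 8, 'COMMAND ABORTED DUE TO SQ DELETION')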
00:27:40.996 [2024-06-07 23:24:03.459085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.996 [2024-06-07 23:24:03.459094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.996 [2024-06-07 23:24:03.459106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.996 [2024-06-07 23:24:03.459113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.996 [2024-06-07 23:24:03.459123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.996 [2024-06-07 23:24:03.459130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.996 [2024-06-07 23:24:03.459139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.996 [2024-06-07 23:24:03.459146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.996 [2024-06-07 23:24:03.459155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.996 [2024-06-07 23:24:03.459162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.996 [2024-06-07 23:24:03.459171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.996 [2024-06-07 23:24:03.459178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.996 [2024-06-07 23:24:03.459187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.996 [2024-06-07 23:24:03.459194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.996 [2024-06-07 23:24:03.459204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.996 [2024-06-07 23:24:03.459211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.996 [2024-06-07 23:24:03.459220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.996 [2024-06-07 23:24:03.459228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.996 [2024-06-07 23:24:03.459237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.996 [2024-06-07 23:24:03.459252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.996 [2024-06-07 
23:24:03.459262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.996 [2024-06-07 23:24:03.459269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.996 [2024-06-07 23:24:03.459278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.996 [2024-06-07 23:24:03.459285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.996 [2024-06-07 23:24:03.459295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.996 [2024-06-07 23:24:03.459302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.996 [2024-06-07 23:24:03.459311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.996 [2024-06-07 23:24:03.459318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.996 [2024-06-07 23:24:03.464185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.996 [2024-06-07 23:24:03.464217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.996 [2024-06-07 23:24:03.464229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.996 [2024-06-07 23:24:03.464237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.996 [2024-06-07 23:24:03.464257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.996 [2024-06-07 23:24:03.464264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.996 [2024-06-07 23:24:03.464274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.996 [2024-06-07 23:24:03.464281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.996 [2024-06-07 23:24:03.464290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.996 [2024-06-07 23:24:03.464297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.996 [2024-06-07 23:24:03.464307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.996 [2024-06-07 23:24:03.464314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.996 [2024-06-07 23:24:03.464324] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.996 [2024-06-07 23:24:03.464331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.996 [2024-06-07 23:24:03.464340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.996 [2024-06-07 23:24:03.464347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.996 [2024-06-07 23:24:03.464357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.996 [2024-06-07 23:24:03.464368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.996 [2024-06-07 23:24:03.464378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.996 [2024-06-07 23:24:03.464385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.996 [2024-06-07 23:24:03.464394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.996 [2024-06-07 23:24:03.464401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.996 [2024-06-07 23:24:03.464410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.996 [2024-06-07 23:24:03.464417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.464427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.464434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.464443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.464450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.464459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.464467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.464476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.464483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.464492] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.464500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.464509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.464516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.464525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.464532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.464542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.464549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.464558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.464565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.464576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.464584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.464594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.464601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.464610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.464617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.464627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.464634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.464643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.464650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.464659] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.464666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.464675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.464683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.464692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.464699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.464708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.464715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.464724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.464731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.464741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.464748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.464757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.464764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.464773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.464782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.464791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.464799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.464808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.464815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.464824] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.464831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.464840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.464847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.464856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.464863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.464872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.464879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.464888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.464895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.464905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.464912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.464921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.464928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.464937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.464945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.464953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.464960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.464969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.464976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.464987] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.464994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.465003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.465010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.465019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.465026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.465035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.997 [2024-06-07 23:24:03.465042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.997 [2024-06-07 23:24:03.465408] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f5eb90 was disconnected and freed. reset controller. 00:27:40.997 [2024-06-07 23:24:03.465442] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:40.997 [2024-06-07 23:24:03.465649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.465663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.465675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.465683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.465693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.465700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.465710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.465717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.465726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.465733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.465742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.465749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.465758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.465766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.465775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.465786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.465795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.465802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.465811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.465819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.465828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.465835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.465844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.465852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.465861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.465868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.465877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.465884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.465893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.465900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.465910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25344 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.465917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.465926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.465933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.465942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.465949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.465958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.465965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.465974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.465981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.465992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.465999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.466008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.466015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.466024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.466031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.466041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.466048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.466057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.466064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.466073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.466080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.466089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.466096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.466106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.466113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.466123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.466130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.466139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.466146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.466156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.466163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.466172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.466179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.466188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.466196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.466206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.466212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.466222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.466229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.466238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:40.998 [2024-06-07 23:24:03.466257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.466266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.466273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.466282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.466289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.466298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.466305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.466315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.998 [2024-06-07 23:24:03.466322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.998 [2024-06-07 23:24:03.466331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.999 [2024-06-07 23:24:03.466338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.999 [2024-06-07 23:24:03.466347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.999 [2024-06-07 23:24:03.466354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.999 [2024-06-07 23:24:03.466363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.999 [2024-06-07 23:24:03.466370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.999 [2024-06-07 23:24:03.466379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.999 [2024-06-07 23:24:03.466386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.999 [2024-06-07 23:24:03.466396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.999 [2024-06-07 23:24:03.466403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.999 [2024-06-07 23:24:03.466414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.999 [2024-06-07 
23:24:03.466421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.999 [2024-06-07 23:24:03.466430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.999 [2024-06-07 23:24:03.466437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.999 [2024-06-07 23:24:03.466446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.999 [2024-06-07 23:24:03.466453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.999 [2024-06-07 23:24:03.466462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.999 [2024-06-07 23:24:03.466469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.999 [2024-06-07 23:24:03.466478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.999 [2024-06-07 23:24:03.466485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.999 [2024-06-07 23:24:03.466495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.999 [2024-06-07 23:24:03.466502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.999 [2024-06-07 23:24:03.466512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.999 [2024-06-07 23:24:03.466518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.999 [2024-06-07 23:24:03.466528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.999 [2024-06-07 23:24:03.466535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.999 [2024-06-07 23:24:03.466544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.999 [2024-06-07 23:24:03.466551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.999 [2024-06-07 23:24:03.466560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.999 [2024-06-07 23:24:03.466568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.999 [2024-06-07 23:24:03.466577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.999 [2024-06-07 23:24:03.466584] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.999 [2024-06-07 23:24:03.466593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.999 [2024-06-07 23:24:03.466600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.999 [2024-06-07 23:24:03.466609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.999 [2024-06-07 23:24:03.466618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.999 [2024-06-07 23:24:03.466628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.999 [2024-06-07 23:24:03.466634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.999 [2024-06-07 23:24:03.466643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.999 [2024-06-07 23:24:03.466650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.999 [2024-06-07 23:24:03.466709] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe337f0 was disconnected and freed. reset controller. 00:27:40.999 [2024-06-07 23:24:03.470546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.999 [2024-06-07 23:24:03.470975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.999 [2024-06-07 23:24:03.470988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef8350 with addr=10.0.0.2, port=4420 00:27:40.999 [2024-06-07 23:24:03.470999] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef8350 is same with the state(5) to be set 00:27:40.999 [2024-06-07 23:24:03.471489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.999 [2024-06-07 23:24:03.471901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.999 [2024-06-07 23:24:03.471914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd90630 with addr=10.0.0.2, port=4420 00:27:40.999 [2024-06-07 23:24:03.471924] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd90630 is same with the state(5) to be set 00:27:40.999 [2024-06-07 23:24:03.471952] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf02650 (9): Bad file descriptor 00:27:40.999 [2024-06-07 23:24:03.471967] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
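The socket errors just above carry raw errno values: on Linux, errno 111 is ECONNREFUSED (the connect() to 10.0.0.2 port 4420 is refused while the target side is mid-reset), and the "(9): Bad file descriptor" flush failures correspond to EBADF on qpair sockets that have already been torn down. A small sketch, assuming a Linux errno table, that maps the two codes to their names:

import errno
import os

# 111 comes from the posix_sock_create connect() failures above,
# 9 from the "Bad file descriptor" flush errors (Linux errno numbering assumed).
for code in (111, 9):
    print(code, errno.errorcode[code], os.strerror(code))
# 111 ECONNREFUSED Connection refused
# 9 EBADF Bad file descriptor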
00:27:40.999 [2024-06-07 23:24:03.471984] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd88180 (9): Bad file descriptor 00:27:40.999 [2024-06-07 23:24:03.472004] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf102b0 (9): Bad file descriptor 00:27:40.999 [2024-06-07 23:24:03.472022] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:40.999 [2024-06-07 23:24:03.472090] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:40.999 [2024-06-07 23:24:03.472134] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:40.999 [2024-06-07 23:24:03.473622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.999 [2024-06-07 23:24:03.473639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.999 [2024-06-07 23:24:03.473656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.999 [2024-06-07 23:24:03.473666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.999 [2024-06-07 23:24:03.473678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.999 [2024-06-07 23:24:03.473687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.999 [2024-06-07 23:24:03.473697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.999 [2024-06-07 23:24:03.473711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.999 [2024-06-07 23:24:03.473722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.999 [2024-06-07 23:24:03.473730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.999 [2024-06-07 23:24:03.473741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.999 [2024-06-07 23:24:03.473749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.999 [2024-06-07 23:24:03.473760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.999 [2024-06-07 23:24:03.473769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.999 [2024-06-07 23:24:03.473780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.999 [2024-06-07 23:24:03.473789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.999 [2024-06-07 
23:24:03.473800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.999 [2024-06-07 23:24:03.473808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.999 [2024-06-07 23:24:03.473818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.999 [2024-06-07 23:24:03.473825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.999 [2024-06-07 23:24:03.473834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.999 [2024-06-07 23:24:03.473841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.999 [2024-06-07 23:24:03.473851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.999 [2024-06-07 23:24:03.473858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.999 [2024-06-07 23:24:03.473868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.999 [2024-06-07 23:24:03.473874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:40.999 [2024-06-07 23:24:03.473884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.000 [2024-06-07 23:24:03.473891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.000 [2024-06-07 23:24:03.473900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.000 [2024-06-07 23:24:03.473907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.000 [2024-06-07 23:24:03.473916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.000 [2024-06-07 23:24:03.473924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.000 [2024-06-07 23:24:03.473934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.000 [2024-06-07 23:24:03.473942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.000 [2024-06-07 23:24:03.473952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.000 [2024-06-07 23:24:03.473959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.000 [2024-06-07 23:24:03.473968] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.000 [2024-06-07 23:24:03.473976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.000 [2024-06-07 23:24:03.473985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.000 [2024-06-07 23:24:03.473992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.000 [2024-06-07 23:24:03.474001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.000 [2024-06-07 23:24:03.474008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.000 [2024-06-07 23:24:03.474017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.000 [2024-06-07 23:24:03.474024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.000 [2024-06-07 23:24:03.474033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.000 [2024-06-07 23:24:03.474041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.000 [2024-06-07 23:24:03.474094] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xeaebb0 was disconnected and freed. reset controller. 00:27:41.000 [2024-06-07 23:24:03.474102] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
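Blocks like the ones above repeat for each qpair the reset path tears down: a burst of aborted READ/WRITE command prints followed by a single "disconnected and freed" notice from bdev_nvme_disconnected_qpair_cb. A rough, illustrative sketch for summarizing such a console log offline (the file name is a placeholder, not part of this run):

import re
from collections import Counter

aborted = 0
freed = Counter()
with open("console.log") as log:  # placeholder file name
    for line in log:
        if "ABORTED - SQ DELETION" in line:
            aborted += 1
        m = re.search(r"qpair (0x[0-9a-fA-F]+) was disconnected and freed", line)
        if m:
            freed[m.group(1)] += 1

print(f"{aborted} aborted commands across {len(freed)} freed qpairs")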
00:27:41.000 [2024-06-07 23:24:03.474148] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:41.000 [2024-06-07 23:24:03.474217] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:41.000 [2024-06-07 23:24:03.474260] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:41.000 [2024-06-07 23:24:03.474273] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:27:41.000 [2024-06-07 23:24:03.474300] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xef8350 (9): Bad file descriptor 00:27:41.000 [2024-06-07 23:24:03.474310] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd90630 (9): Bad file descriptor 00:27:41.000 [2024-06-07 23:24:03.474354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.000 [2024-06-07 23:24:03.474364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.000 [2024-06-07 23:24:03.474375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.000 [2024-06-07 23:24:03.474382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.000 [2024-06-07 23:24:03.474392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.000 [2024-06-07 23:24:03.474403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.000 [2024-06-07 23:24:03.474412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.000 [2024-06-07 23:24:03.474419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.000 [2024-06-07 23:24:03.474429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.000 [2024-06-07 23:24:03.474436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.000 [2024-06-07 23:24:03.474446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.000 [2024-06-07 23:24:03.474453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.000 [2024-06-07 23:24:03.474462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.000 [2024-06-07 23:24:03.474469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.000 [2024-06-07 23:24:03.474479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.000 [2024-06-07 23:24:03.474486] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.000 [2024-06-07 23:24:03.474495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.000 [2024-06-07 23:24:03.474503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.000 [2024-06-07 23:24:03.474512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.000 [2024-06-07 23:24:03.474519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.000 [2024-06-07 23:24:03.474528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.000 [2024-06-07 23:24:03.474535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.000 [2024-06-07 23:24:03.474544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.000 [2024-06-07 23:24:03.474551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.000 [2024-06-07 23:24:03.474560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.000 [2024-06-07 23:24:03.474568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.000 [2024-06-07 23:24:03.474577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.000 [2024-06-07 23:24:03.474584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.000 [2024-06-07 23:24:03.474593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.000 [2024-06-07 23:24:03.474600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.000 [2024-06-07 23:24:03.474610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.000 [2024-06-07 23:24:03.474617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.000 [2024-06-07 23:24:03.474627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.000 [2024-06-07 23:24:03.474633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.000 [2024-06-07 23:24:03.474643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.000 [2024-06-07 23:24:03.474649] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.000 [2024-06-07 23:24:03.474659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.000 [2024-06-07 23:24:03.474666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.000 [2024-06-07 23:24:03.474675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.000 [2024-06-07 23:24:03.474682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.000 [2024-06-07 23:24:03.474691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.000 [2024-06-07 23:24:03.474698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.000 [2024-06-07 23:24:03.474707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.000 [2024-06-07 23:24:03.474714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.000 [2024-06-07 23:24:03.474723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.474730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.474739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.474746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.474755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.474762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.474771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.474778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.474788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.474795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.474805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.474813] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.474822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.474830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.474839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.474846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.474856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.474863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.474872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.474879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.474888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.474895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.474904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.474911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.474920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.474927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.474936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.474943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.474952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.474959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.474969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.474976] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.474985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.474992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.475001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.475008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.475018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.475026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.475035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.475042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.475051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.475058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.475067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.475074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.475083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.475090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.475099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.475106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.475116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.475123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.475132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.475139] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.475148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.475155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.475164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.475171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.475180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.475187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.475197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.475204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.475214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.475222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.475231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.475239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.475254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.475262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.475271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.475278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.475287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.475294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.475303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.475310] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.475319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.475327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.475335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.475343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.475352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.475359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.001 [2024-06-07 23:24:03.475368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.001 [2024-06-07 23:24:03.475375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.475384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.475392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.475401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.475408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.475416] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe25f30 is same with the state(5) to be set 00:27:41.002 [2024-06-07 23:24:03.476658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.476673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.476686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.476695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.476706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.476715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.476726] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.476734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.476745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.476754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.476765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.476774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.476784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.476793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.476804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.476813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.476824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.476832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.476843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.476852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.476863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.476871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.476880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.476887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.476897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.476904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.476913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:17 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.476921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.476931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.476938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.476947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.476954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.476963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.476970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.476980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.476987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.476997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.477004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.477013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.477020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.477030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.477037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.477046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.477053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.477062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.477069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.477079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21760 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.477086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.477095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.477102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.477111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.477118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.477131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.477139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.477148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.477155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.477164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.477171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.477180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.477187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.477197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.477204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.477214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.477221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.477231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.477238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.477259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:26496 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.477266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.477276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.477283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.477293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.477300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.477309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.002 [2024-06-07 23:24:03.477316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.002 [2024-06-07 23:24:03.477325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.003 [2024-06-07 23:24:03.477332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.003 [2024-06-07 23:24:03.477342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.003 [2024-06-07 23:24:03.477350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.003 [2024-06-07 23:24:03.477359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.003 [2024-06-07 23:24:03.477366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.003 [2024-06-07 23:24:03.477376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.003 [2024-06-07 23:24:03.477382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.003 [2024-06-07 23:24:03.477391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.003 [2024-06-07 23:24:03.477398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.003 [2024-06-07 23:24:03.477408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.003 [2024-06-07 23:24:03.477415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.003 [2024-06-07 23:24:03.477424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:41.003 [2024-06-07 23:24:03.477431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.003 [2024-06-07 23:24:03.477440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.003 [2024-06-07 23:24:03.477447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.003 [2024-06-07 23:24:03.477456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.003 [2024-06-07 23:24:03.477463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.003 [2024-06-07 23:24:03.477473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.003 [2024-06-07 23:24:03.477480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.003 [2024-06-07 23:24:03.477489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.003 [2024-06-07 23:24:03.477496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.003 [2024-06-07 23:24:03.477505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.003 [2024-06-07 23:24:03.477512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.003 [2024-06-07 23:24:03.477522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.003 [2024-06-07 23:24:03.477528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.003 [2024-06-07 23:24:03.477537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.003 [2024-06-07 23:24:03.477544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.003 [2024-06-07 23:24:03.477555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.003 [2024-06-07 23:24:03.477562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.003 [2024-06-07 23:24:03.477571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.003 [2024-06-07 23:24:03.477578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.003 [2024-06-07 23:24:03.477587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.003 
[2024-06-07 23:24:03.477594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.003 [2024-06-07 23:24:03.477603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.003 [2024-06-07 23:24:03.477610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.003 [2024-06-07 23:24:03.477619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.003 [2024-06-07 23:24:03.477626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.003 [2024-06-07 23:24:03.477635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.003 [2024-06-07 23:24:03.477643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.003 [2024-06-07 23:24:03.477653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.003 [2024-06-07 23:24:03.477660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.003 [2024-06-07 23:24:03.477669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.003 [2024-06-07 23:24:03.477676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.003 [2024-06-07 23:24:03.477685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.003 [2024-06-07 23:24:03.477692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.003 [2024-06-07 23:24:03.477702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.003 [2024-06-07 23:24:03.477709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.003 [2024-06-07 23:24:03.477718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.003 [2024-06-07 23:24:03.477725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.003 [2024-06-07 23:24:03.477734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.003 [2024-06-07 23:24:03.477742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.003 [2024-06-07 23:24:03.477751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.003 [2024-06-07 
23:24:03.477760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.003 [2024-06-07 23:24:03.477769] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd54660 is same with the state(5) to be set 00:27:41.003 [2024-06-07 23:24:03.480555] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:41.003 [2024-06-07 23:24:03.480578] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.003 [2024-06-07 23:24:03.480587] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:41.003 [2024-06-07 23:24:03.480596] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:27:41.003 [2024-06-07 23:24:03.480971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.003 [2024-06-07 23:24:03.481459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.003 [2024-06-07 23:24:03.481498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd87430 with addr=10.0.0.2, port=4420 00:27:41.003 [2024-06-07 23:24:03.481509] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87430 is same with the state(5) to be set 00:27:41.003 [2024-06-07 23:24:03.481900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.003 [2024-06-07 23:24:03.482415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.003 [2024-06-07 23:24:03.482452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf02650 with addr=10.0.0.2, port=4420 00:27:41.003 [2024-06-07 23:24:03.482465] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf02650 is same with the state(5) to be set 00:27:41.003 [2024-06-07 23:24:03.482476] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:41.003 [2024-06-07 23:24:03.482485] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:41.003 [2024-06-07 23:24:03.482495] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:27:41.003 [2024-06-07 23:24:03.482513] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:41.003 [2024-06-07 23:24:03.482522] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:41.003 [2024-06-07 23:24:03.482530] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:41.003 [2024-06-07 23:24:03.482548] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:41.003 [2024-06-07 23:24:03.482562] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:41.003 [2024-06-07 23:24:03.482938] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.003 [2024-06-07 23:24:03.482950] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:41.003 [2024-06-07 23:24:03.483200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.003 [2024-06-07 23:24:03.483462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.003 [2024-06-07 23:24:03.483471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf04510 with addr=10.0.0.2, port=4420 00:27:41.003 [2024-06-07 23:24:03.483479] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf04510 is same with the state(5) to be set 00:27:41.003 [2024-06-07 23:24:03.483852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.003 [2024-06-07 23:24:03.484246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.003 [2024-06-07 23:24:03.484256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd642c0 with addr=10.0.0.2, port=4420 00:27:41.003 [2024-06-07 23:24:03.484269] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd642c0 is same with the state(5) to be set 00:27:41.003 [2024-06-07 23:24:03.484611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.004 [2024-06-07 23:24:03.484834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.004 [2024-06-07 23:24:03.484843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd719b0 with addr=10.0.0.2, port=4420 00:27:41.004 [2024-06-07 23:24:03.484850] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd719b0 is same with the state(5) to be set 00:27:41.004 [2024-06-07 23:24:03.485232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.004 [2024-06-07 23:24:03.485520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.004 [2024-06-07 23:24:03.485530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2ade0 with addr=10.0.0.2, port=4420 00:27:41.004 [2024-06-07 23:24:03.485538] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2ade0 is same with the state(5) to be set 00:27:41.004 [2024-06-07 23:24:03.485549] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd87430 (9): Bad file descriptor 00:27:41.004 [2024-06-07 23:24:03.485560] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf02650 (9): Bad file descriptor 00:27:41.004 [2024-06-07 23:24:03.486374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.004 [2024-06-07 23:24:03.486388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.004 [2024-06-07 23:24:03.486403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.004 [2024-06-07 23:24:03.486411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.004 [2024-06-07 23:24:03.486420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.004 [2024-06-07 23:24:03.486427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.004 [2024-06-07 23:24:03.486437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.004 [2024-06-07 23:24:03.486444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.004 [2024-06-07 23:24:03.486453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.004 [2024-06-07 23:24:03.486460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.004 [2024-06-07 23:24:03.486469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.004 [2024-06-07 23:24:03.486476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.004 [2024-06-07 23:24:03.486485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.004 [2024-06-07 23:24:03.486492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.004 [2024-06-07 23:24:03.486502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.004 [2024-06-07 23:24:03.486509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.004 [2024-06-07 23:24:03.486521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.004 [2024-06-07 23:24:03.486528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.004 [2024-06-07 23:24:03.486537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.004 [2024-06-07 23:24:03.486544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.004 [2024-06-07 23:24:03.486553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.004 [2024-06-07 23:24:03.486560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.004 [2024-06-07 23:24:03.486569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.004 [2024-06-07 23:24:03.486576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.004 [2024-06-07 23:24:03.486586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.004 [2024-06-07 23:24:03.486592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.004 [2024-06-07 23:24:03.486602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.004 [2024-06-07 23:24:03.486609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.004 [2024-06-07 23:24:03.486619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.004 [2024-06-07 23:24:03.486625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.004 [2024-06-07 23:24:03.486634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.004 [2024-06-07 23:24:03.486642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.004 [2024-06-07 23:24:03.486651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.004 [2024-06-07 23:24:03.486658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.004 [2024-06-07 23:24:03.486667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.004 [2024-06-07 23:24:03.486674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.004 [2024-06-07 23:24:03.486683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.004 [2024-06-07 23:24:03.486690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.004 [2024-06-07 23:24:03.486700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.004 [2024-06-07 23:24:03.486707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.004 [2024-06-07 23:24:03.486716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.004 [2024-06-07 23:24:03.486726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.004 [2024-06-07 23:24:03.486736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.004 [2024-06-07 23:24:03.486743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.004 [2024-06-07 23:24:03.486752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.004 [2024-06-07 23:24:03.486759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:41.004 [2024-06-07 23:24:03.486768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.004 [2024-06-07 23:24:03.486775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.004 [2024-06-07 23:24:03.486784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.004 [2024-06-07 23:24:03.486791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.004 [2024-06-07 23:24:03.486800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.004 [2024-06-07 23:24:03.486807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.004 [2024-06-07 23:24:03.486817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.004 [2024-06-07 23:24:03.486824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.004 [2024-06-07 23:24:03.486833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.004 [2024-06-07 23:24:03.486839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.004 [2024-06-07 23:24:03.486849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.004 [2024-06-07 23:24:03.486856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.004 [2024-06-07 23:24:03.486865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.004 [2024-06-07 23:24:03.486872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.004 [2024-06-07 23:24:03.486881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.004 [2024-06-07 23:24:03.486888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.004 [2024-06-07 23:24:03.486897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.004 [2024-06-07 23:24:03.486904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.004 [2024-06-07 23:24:03.486913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.004 [2024-06-07 23:24:03.486921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:41.004 [2024-06-07 23:24:03.486931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.004 [2024-06-07 23:24:03.486938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.004 [2024-06-07 23:24:03.486947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.004 [2024-06-07 23:24:03.486954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.004 [2024-06-07 23:24:03.486964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.004 [2024-06-07 23:24:03.486971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.005 [2024-06-07 23:24:03.486980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.005 [2024-06-07 23:24:03.486987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.005 [2024-06-07 23:24:03.486996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.005 [2024-06-07 23:24:03.487003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.005 [2024-06-07 23:24:03.487012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.005 [2024-06-07 23:24:03.487019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.005 [2024-06-07 23:24:03.487029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.005 [2024-06-07 23:24:03.487036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.005 [2024-06-07 23:24:03.487046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.005 [2024-06-07 23:24:03.487053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.005 [2024-06-07 23:24:03.487061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.005 [2024-06-07 23:24:03.487068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.005 [2024-06-07 23:24:03.487077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.005 [2024-06-07 23:24:03.487085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.005 
[2024-06-07 23:24:03.487094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.005 [2024-06-07 23:24:03.487101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.005 [2024-06-07 23:24:03.487110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.005 [2024-06-07 23:24:03.487117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.005 [2024-06-07 23:24:03.487126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.005 [2024-06-07 23:24:03.487135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.005 [2024-06-07 23:24:03.487144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.005 [2024-06-07 23:24:03.487151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.005 [2024-06-07 23:24:03.487161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.005 [2024-06-07 23:24:03.487168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.005 [2024-06-07 23:24:03.487177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.005 [2024-06-07 23:24:03.487184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.005 [2024-06-07 23:24:03.487193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.005 [2024-06-07 23:24:03.487200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.005 [2024-06-07 23:24:03.487209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.005 [2024-06-07 23:24:03.487216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.005 [2024-06-07 23:24:03.487226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.005 [2024-06-07 23:24:03.487233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.005 [2024-06-07 23:24:03.487246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.005 [2024-06-07 23:24:03.487254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.005 [2024-06-07 
23:24:03.487263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.005 [2024-06-07 23:24:03.487270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.005 [2024-06-07 23:24:03.487279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.005 [2024-06-07 23:24:03.487286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.005 [2024-06-07 23:24:03.487295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.005 [2024-06-07 23:24:03.487302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.005 [2024-06-07 23:24:03.487311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.005 [2024-06-07 23:24:03.487318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.005 [2024-06-07 23:24:03.487327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.005 [2024-06-07 23:24:03.487334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.005 [2024-06-07 23:24:03.487344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.005 [2024-06-07 23:24:03.487351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.005 [2024-06-07 23:24:03.487360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.005 [2024-06-07 23:24:03.487367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.005 [2024-06-07 23:24:03.487376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.005 [2024-06-07 23:24:03.487383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.005 [2024-06-07 23:24:03.487392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.005 [2024-06-07 23:24:03.487399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.005 [2024-06-07 23:24:03.487408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.005 [2024-06-07 23:24:03.487415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.005 [2024-06-07 23:24:03.487424] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.005 [2024-06-07 23:24:03.487431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.005 [2024-06-07 23:24:03.487440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb0190 is same with the state(5) to be set 00:27:41.005 [2024-06-07 23:24:03.488707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.005 [2024-06-07 23:24:03.488721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.005 [2024-06-07 23:24:03.488734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.005 [2024-06-07 23:24:03.488743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.005 [2024-06-07 23:24:03.488754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.005 [2024-06-07 23:24:03.488762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.005 [2024-06-07 23:24:03.488774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.005 [2024-06-07 23:24:03.488783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.005 [2024-06-07 23:24:03.488794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.005 [2024-06-07 23:24:03.488802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.005 [2024-06-07 23:24:03.488813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.005 [2024-06-07 23:24:03.488820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.488832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.488839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.488849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.488856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.488865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.488872] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.488881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.488888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.488897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.488904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.488914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.488921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.488930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.488937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.488946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.488954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.488963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.488970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.488980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.488987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.488996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.489003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.489013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.489019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.489029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.489037] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.489046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.489053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.489063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.489069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.489079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.489086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.489095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.489102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.489111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.489118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.489127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.489134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.489144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.489151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.489159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.489167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.489176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.489182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.489192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.489198] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.489208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.489215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.489224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.489231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.489248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.489255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.489265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.489271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.489280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.489288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.489297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.489304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.489313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.489320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.489329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.489336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.489346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.489353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.489362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.489369] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.489378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.489385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.489394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.489401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.489411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.489418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.489427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.489434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.489443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.489452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.489462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.489469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.006 [2024-06-07 23:24:03.489478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.006 [2024-06-07 23:24:03.489485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.007 [2024-06-07 23:24:03.489494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.007 [2024-06-07 23:24:03.489501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.007 [2024-06-07 23:24:03.489510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.007 [2024-06-07 23:24:03.489517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.007 [2024-06-07 23:24:03.489526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.007 [2024-06-07 23:24:03.489533] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.007 [2024-06-07 23:24:03.489542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.007 [2024-06-07 23:24:03.489550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.007 [2024-06-07 23:24:03.489559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.007 [2024-06-07 23:24:03.489566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.007 [2024-06-07 23:24:03.489575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.007 [2024-06-07 23:24:03.489583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.007 [2024-06-07 23:24:03.489593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.007 [2024-06-07 23:24:03.489600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.007 [2024-06-07 23:24:03.489609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.007 [2024-06-07 23:24:03.489616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.007 [2024-06-07 23:24:03.489625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.007 [2024-06-07 23:24:03.489632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.007 [2024-06-07 23:24:03.489641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.007 [2024-06-07 23:24:03.489649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.007 [2024-06-07 23:24:03.489659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.007 [2024-06-07 23:24:03.489666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.007 [2024-06-07 23:24:03.489676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.007 [2024-06-07 23:24:03.489683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.007 [2024-06-07 23:24:03.489692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.007 [2024-06-07 23:24:03.489699] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.007 [2024-06-07 23:24:03.489708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.007 [2024-06-07 23:24:03.489715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.007 [2024-06-07 23:24:03.489724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.007 [2024-06-07 23:24:03.489731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.007 [2024-06-07 23:24:03.489740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.007 [2024-06-07 23:24:03.489747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.007 [2024-06-07 23:24:03.489757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.007 [2024-06-07 23:24:03.489764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.007 [2024-06-07 23:24:03.489773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.007 [2024-06-07 23:24:03.489780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.007 [2024-06-07 23:24:03.489788] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbc020 is same with the state(5) to be set 00:27:41.007 [2024-06-07 23:24:03.491961] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:27:41.007 task offset: 28544 on job bdev=Nvme7n1 fails 00:27:41.007 00:27:41.007 Latency(us) 00:27:41.007 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:41.007 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:41.007 Job: Nvme1n1 ended in about 0.54 seconds with error 00:27:41.007 Verification LBA range: start 0x0 length 0x400 00:27:41.007 Nvme1n1 : 0.54 304.66 19.04 118.89 0.00 149692.27 65972.91 159907.84 00:27:41.007 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:41.007 Job: Nvme2n1 ended in about 0.54 seconds with error 00:27:41.007 Verification LBA range: start 0x0 length 0x400 00:27:41.007 Nvme2n1 : 0.54 303.34 18.96 118.38 0.00 148084.53 85633.71 134567.25 00:27:41.007 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:41.007 Job: Nvme3n1 ended in about 0.52 seconds with error 00:27:41.007 Verification LBA range: start 0x0 length 0x400 00:27:41.007 Nvme3n1 : 0.52 396.84 24.80 123.29 0.00 118034.08 5543.25 138936.32 00:27:41.007 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:41.007 Job: Nvme4n1 ended in about 0.54 seconds with error 00:27:41.007 Verification LBA range: start 0x0 length 0x400 00:27:41.007 Nvme4n1 : 0.54 313.84 19.61 112.09 0.00 141857.65 14854.83 141557.76 
00:27:41.007 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:41.007 Job: Nvme5n1 ended in about 0.54 seconds with error 00:27:41.007 Verification LBA range: start 0x0 length 0x400 00:27:41.007 Nvme5n1 : 0.54 378.43 23.65 42.46 0.00 139620.42 11851.09 137188.69 00:27:41.007 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:41.007 Job: Nvme6n1 ended in about 0.55 seconds with error 00:27:41.007 Verification LBA range: start 0x0 length 0x400 00:27:41.007 Nvme6n1 : 0.55 298.01 18.63 116.30 0.00 141882.57 78643.20 124955.31 00:27:41.007 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:41.007 Job: Nvme7n1 ended in about 0.52 seconds with error 00:27:41.007 Verification LBA range: start 0x0 length 0x400 00:27:41.007 Nvme7n1 : 0.52 398.38 24.90 123.77 0.00 110004.60 18786.99 104857.60 00:27:41.007 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:41.007 Job: Nvme8n1 ended in about 0.53 seconds with error 00:27:41.007 Verification LBA range: start 0x0 length 0x400 00:27:41.007 Nvme8n1 : 0.53 391.93 24.50 120.59 0.00 110529.56 24903.68 99614.72 00:27:41.007 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:41.007 Job: Nvme9n1 ended in about 0.55 seconds with error 00:27:41.007 Verification LBA range: start 0x0 length 0x400 00:27:41.007 Nvme9n1 : 0.55 296.76 18.55 115.81 0.00 135733.30 71652.69 114469.55 00:27:41.007 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:41.007 Job: Nvme10n1 ended in about 0.53 seconds with error 00:27:41.007 Verification LBA range: start 0x0 length 0x400 00:27:41.007 Nvme10n1 : 0.53 308.32 19.27 120.32 0.00 127705.85 10649.60 115343.36 00:27:41.007 =================================================================================================================== 00:27:41.007 Total : 3390.51 211.91 1111.89 0.00 131278.22 5543.25 159907.84 00:27:41.007 [2024-06-07 23:24:03.519109] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:41.007 [2024-06-07 23:24:03.519145] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:27:41.007 [2024-06-07 23:24:03.519186] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf04510 (9): Bad file descriptor 00:27:41.007 [2024-06-07 23:24:03.519199] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd642c0 (9): Bad file descriptor 00:27:41.007 [2024-06-07 23:24:03.519208] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd719b0 (9): Bad file descriptor 00:27:41.007 [2024-06-07 23:24:03.519218] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2ade0 (9): Bad file descriptor 00:27:41.007 [2024-06-07 23:24:03.519226] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:41.007 [2024-06-07 23:24:03.519233] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:41.007 [2024-06-07 23:24:03.519248] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:27:41.007 [2024-06-07 23:24:03.519262] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:41.007 [2024-06-07 23:24:03.519269] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:41.007 [2024-06-07 23:24:03.519275] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:27:41.007 [2024-06-07 23:24:03.519290] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:41.007 [2024-06-07 23:24:03.519302] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:41.007 [2024-06-07 23:24:03.519350] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:41.007 [2024-06-07 23:24:03.519361] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:41.007 [2024-06-07 23:24:03.519373] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:41.007 [2024-06-07 23:24:03.519383] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:41.007 [2024-06-07 23:24:03.519480] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.008 [2024-06-07 23:24:03.519489] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.008 [2024-06-07 23:24:03.519796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.008 [2024-06-07 23:24:03.520078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.008 [2024-06-07 23:24:03.520087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd88180 with addr=10.0.0.2, port=4420 00:27:41.008 [2024-06-07 23:24:03.520097] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88180 is same with the state(5) to be set 00:27:41.008 [2024-06-07 23:24:03.520305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.008 [2024-06-07 23:24:03.520634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.008 [2024-06-07 23:24:03.520644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf102b0 with addr=10.0.0.2, port=4420 00:27:41.008 [2024-06-07 23:24:03.520651] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf102b0 is same with the state(5) to be set 00:27:41.008 [2024-06-07 23:24:03.520658] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:41.008 [2024-06-07 23:24:03.520664] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:41.008 [2024-06-07 23:24:03.520671] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:27:41.008 [2024-06-07 23:24:03.520681] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.008 [2024-06-07 23:24:03.520687] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.008 [2024-06-07 23:24:03.520694] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.008 [2024-06-07 23:24:03.520707] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:41.008 [2024-06-07 23:24:03.520713] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:41.008 [2024-06-07 23:24:03.520719] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:41.008 [2024-06-07 23:24:03.520730] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:41.008 [2024-06-07 23:24:03.520736] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:41.008 [2024-06-07 23:24:03.520742] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:27:41.008 [2024-06-07 23:24:03.520772] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:41.008 [2024-06-07 23:24:03.520784] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:41.008 [2024-06-07 23:24:03.520794] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:41.008 [2024-06-07 23:24:03.520803] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:41.008 [2024-06-07 23:24:03.520813] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:41.008 [2024-06-07 23:24:03.520826] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:41.008 [2024-06-07 23:24:03.521377] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:41.008 [2024-06-07 23:24:03.521388] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:27:41.008 [2024-06-07 23:24:03.521408] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.008 [2024-06-07 23:24:03.521415] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.008 [2024-06-07 23:24:03.521421] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.008 [2024-06-07 23:24:03.521427] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:41.008 [2024-06-07 23:24:03.521448] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd88180 (9): Bad file descriptor 00:27:41.008 [2024-06-07 23:24:03.521459] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf102b0 (9): Bad file descriptor 00:27:41.008 [2024-06-07 23:24:03.521506] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:27:41.008 [2024-06-07 23:24:03.521915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.008 [2024-06-07 23:24:03.522125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.008 [2024-06-07 23:24:03.522134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd90630 with addr=10.0.0.2, port=4420 00:27:41.008 [2024-06-07 23:24:03.522141] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd90630 is same with the state(5) to be set 00:27:41.008 [2024-06-07 23:24:03.522482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.008 [2024-06-07 23:24:03.522878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.008 [2024-06-07 23:24:03.522887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef8350 with addr=10.0.0.2, port=4420 00:27:41.008 [2024-06-07 23:24:03.522894] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef8350 is same with the state(5) to be set 00:27:41.008 [2024-06-07 23:24:03.522901] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:41.008 [2024-06-07 23:24:03.522907] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:41.008 [2024-06-07 23:24:03.522913] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:27:41.008 [2024-06-07 23:24:03.522923] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:27:41.008 [2024-06-07 23:24:03.522929] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:41.008 [2024-06-07 23:24:03.522935] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:27:41.008 [2024-06-07 23:24:03.522968] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:41.008 [2024-06-07 23:24:03.522984] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.008 [2024-06-07 23:24:03.522991] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:41.008 [2024-06-07 23:24:03.523383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.008 [2024-06-07 23:24:03.523736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.008 [2024-06-07 23:24:03.523746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf02650 with addr=10.0.0.2, port=4420 00:27:41.008 [2024-06-07 23:24:03.523753] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf02650 is same with the state(5) to be set 00:27:41.008 [2024-06-07 23:24:03.523762] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd90630 (9): Bad file descriptor 00:27:41.008 [2024-06-07 23:24:03.523774] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xef8350 (9): Bad file descriptor 00:27:41.008 [2024-06-07 23:24:03.524161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.008 [2024-06-07 23:24:03.524389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.008 [2024-06-07 23:24:03.524399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd87430 with addr=10.0.0.2, port=4420 00:27:41.008 [2024-06-07 23:24:03.524406] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87430 is same with the state(5) to be set 00:27:41.008 [2024-06-07 23:24:03.524414] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf02650 (9): Bad file descriptor 00:27:41.008 [2024-06-07 23:24:03.524422] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:41.008 [2024-06-07 23:24:03.524428] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:41.008 [2024-06-07 23:24:03.524434] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:41.008 [2024-06-07 23:24:03.524444] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:41.008 [2024-06-07 23:24:03.524450] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:41.008 [2024-06-07 23:24:03.524456] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:27:41.008 [2024-06-07 23:24:03.524494] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.008 [2024-06-07 23:24:03.524502] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.008 [2024-06-07 23:24:03.524509] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd87430 (9): Bad file descriptor 00:27:41.008 [2024-06-07 23:24:03.524517] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:41.008 [2024-06-07 23:24:03.524523] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:41.008 [2024-06-07 23:24:03.524529] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:27:41.008 [2024-06-07 23:24:03.524557] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:41.008 [2024-06-07 23:24:03.524564] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:41.008 [2024-06-07 23:24:03.524570] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:41.008 [2024-06-07 23:24:03.524576] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:41.008 [2024-06-07 23:24:03.524601] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.269 23:24:03 -- target/shutdown.sh@135 -- # nvmfpid= 00:27:41.269 23:24:03 -- target/shutdown.sh@138 -- # sleep 1 00:27:42.211 23:24:04 -- target/shutdown.sh@141 -- # kill -9 2961303 00:27:42.211 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (2961303) - No such process 00:27:42.211 23:24:04 -- target/shutdown.sh@141 -- # true 00:27:42.211 23:24:04 -- target/shutdown.sh@143 -- # stoptarget 00:27:42.211 23:24:04 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:42.211 23:24:04 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:42.211 23:24:04 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:42.211 23:24:04 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:42.211 23:24:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:42.211 23:24:04 -- nvmf/common.sh@116 -- # sync 00:27:42.211 23:24:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:42.211 23:24:04 -- nvmf/common.sh@119 -- # set +e 00:27:42.211 23:24:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:42.211 23:24:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:42.211 rmmod nvme_tcp 00:27:42.211 rmmod nvme_fabrics 00:27:42.211 rmmod nvme_keyring 00:27:42.211 23:24:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:42.211 23:24:04 -- nvmf/common.sh@123 -- # set -e 00:27:42.211 23:24:04 -- nvmf/common.sh@124 -- # return 0 00:27:42.211 23:24:04 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:27:42.211 23:24:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:42.211 23:24:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:42.211 23:24:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:42.211 23:24:04 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:42.211 23:24:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:42.211 23:24:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:42.211 23:24:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:42.211 23:24:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.758 23:24:06 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:44.758 00:27:44.758 real 0m7.354s 00:27:44.758 user 0m17.425s 00:27:44.758 sys 0m1.144s 00:27:44.758 23:24:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:44.758 23:24:06 -- common/autotest_common.sh@10 -- # set +x 00:27:44.758 ************************************ 00:27:44.758 END TEST nvmf_shutdown_tc3 00:27:44.758 ************************************ 00:27:44.758 23:24:06 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:27:44.758 00:27:44.758 real 0m31.496s 00:27:44.758 user 1m12.183s 00:27:44.758 sys 0m9.176s 00:27:44.758 23:24:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:44.758 23:24:06 -- common/autotest_common.sh@10 
-- # set +x 00:27:44.758 ************************************ 00:27:44.758 END TEST nvmf_shutdown 00:27:44.758 ************************************ 00:27:44.758 23:24:06 -- nvmf/nvmf.sh@85 -- # timing_exit target 00:27:44.758 23:24:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:44.758 23:24:06 -- common/autotest_common.sh@10 -- # set +x 00:27:44.758 23:24:06 -- nvmf/nvmf.sh@87 -- # timing_enter host 00:27:44.758 23:24:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:44.758 23:24:06 -- common/autotest_common.sh@10 -- # set +x 00:27:44.758 23:24:06 -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:27:44.758 23:24:06 -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:44.758 23:24:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:44.758 23:24:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:44.758 23:24:06 -- common/autotest_common.sh@10 -- # set +x 00:27:44.758 ************************************ 00:27:44.758 START TEST nvmf_multicontroller 00:27:44.758 ************************************ 00:27:44.758 23:24:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:44.758 * Looking for test storage... 00:27:44.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:44.758 23:24:07 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:44.758 23:24:07 -- nvmf/common.sh@7 -- # uname -s 00:27:44.758 23:24:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:44.758 23:24:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:44.758 23:24:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:44.758 23:24:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:44.758 23:24:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:44.758 23:24:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:44.758 23:24:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:44.758 23:24:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:44.758 23:24:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:44.758 23:24:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:44.758 23:24:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:44.758 23:24:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:44.758 23:24:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:44.758 23:24:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:44.758 23:24:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:44.758 23:24:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:44.758 23:24:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:44.758 23:24:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:44.758 23:24:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:44.758 23:24:07 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.758 23:24:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.758 23:24:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.758 23:24:07 -- paths/export.sh@5 -- # export PATH 00:27:44.758 23:24:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.758 23:24:07 -- nvmf/common.sh@46 -- # : 0 00:27:44.758 23:24:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:44.758 23:24:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:44.758 23:24:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:44.758 23:24:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:44.758 23:24:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:44.758 23:24:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:44.758 23:24:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:44.758 23:24:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:44.758 23:24:07 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:44.758 23:24:07 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:44.758 23:24:07 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:44.758 23:24:07 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:44.758 23:24:07 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:44.758 23:24:07 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:44.758 23:24:07 -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:44.758 23:24:07 -- nvmf/common.sh@429 -- # '[' -z tcp 
']' 00:27:44.758 23:24:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:44.758 23:24:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:44.758 23:24:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:44.758 23:24:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:44.758 23:24:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.758 23:24:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:44.758 23:24:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.758 23:24:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:44.758 23:24:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:44.758 23:24:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:44.758 23:24:07 -- common/autotest_common.sh@10 -- # set +x 00:27:51.349 23:24:13 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:51.349 23:24:13 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:51.349 23:24:13 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:51.349 23:24:13 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:51.349 23:24:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:51.349 23:24:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:51.349 23:24:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:51.349 23:24:14 -- nvmf/common.sh@294 -- # net_devs=() 00:27:51.349 23:24:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:51.349 23:24:14 -- nvmf/common.sh@295 -- # e810=() 00:27:51.349 23:24:14 -- nvmf/common.sh@295 -- # local -ga e810 00:27:51.349 23:24:14 -- nvmf/common.sh@296 -- # x722=() 00:27:51.349 23:24:14 -- nvmf/common.sh@296 -- # local -ga x722 00:27:51.349 23:24:14 -- nvmf/common.sh@297 -- # mlx=() 00:27:51.349 23:24:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:51.349 23:24:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:51.349 23:24:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:51.349 23:24:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:51.349 23:24:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:51.349 23:24:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:51.349 23:24:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:51.349 23:24:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:51.349 23:24:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:51.349 23:24:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:51.349 23:24:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:51.349 23:24:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:51.349 23:24:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:51.349 23:24:14 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:51.349 23:24:14 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:51.349 23:24:14 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:51.349 23:24:14 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:51.349 23:24:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:51.349 23:24:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:51.349 23:24:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:51.349 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:51.349 23:24:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:51.349 23:24:14 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:51.349 23:24:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.349 23:24:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.349 23:24:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:51.349 23:24:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:51.349 23:24:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:51.349 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:51.349 23:24:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:51.349 23:24:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:51.349 23:24:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.349 23:24:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.349 23:24:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:51.349 23:24:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:51.349 23:24:14 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:51.349 23:24:14 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:51.349 23:24:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:51.349 23:24:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.349 23:24:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:51.349 23:24:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.349 23:24:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:51.349 Found net devices under 0000:31:00.0: cvl_0_0 00:27:51.349 23:24:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.349 23:24:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:51.349 23:24:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.349 23:24:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:51.349 23:24:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.349 23:24:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:51.349 Found net devices under 0000:31:00.1: cvl_0_1 00:27:51.349 23:24:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.349 23:24:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:51.349 23:24:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:51.349 23:24:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:51.349 23:24:14 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:51.349 23:24:14 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:51.349 23:24:14 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:51.349 23:24:14 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:51.349 23:24:14 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:51.349 23:24:14 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:51.349 23:24:14 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:51.349 23:24:14 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:51.349 23:24:14 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:51.349 23:24:14 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:51.349 23:24:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:51.349 23:24:14 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:51.349 23:24:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:51.349 23:24:14 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:51.610 23:24:14 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:27:51.610 23:24:14 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:51.610 23:24:14 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:51.610 23:24:14 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:51.610 23:24:14 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:51.871 23:24:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:51.871 23:24:14 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:51.871 23:24:14 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:51.871 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:51.871 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.766 ms 00:27:51.871 00:27:51.871 --- 10.0.0.2 ping statistics --- 00:27:51.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.871 rtt min/avg/max/mdev = 0.766/0.766/0.766/0.000 ms 00:27:51.871 23:24:14 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:51.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:51.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:27:51.871 00:27:51.871 --- 10.0.0.1 ping statistics --- 00:27:51.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.871 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:27:51.871 23:24:14 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:51.871 23:24:14 -- nvmf/common.sh@410 -- # return 0 00:27:51.871 23:24:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:51.871 23:24:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:51.871 23:24:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:51.871 23:24:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:51.871 23:24:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:51.871 23:24:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:51.871 23:24:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:51.871 23:24:14 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:51.871 23:24:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:51.871 23:24:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:51.871 23:24:14 -- common/autotest_common.sh@10 -- # set +x 00:27:51.871 23:24:14 -- nvmf/common.sh@469 -- # nvmfpid=2966785 00:27:51.871 23:24:14 -- nvmf/common.sh@470 -- # waitforlisten 2966785 00:27:51.871 23:24:14 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:51.871 23:24:14 -- common/autotest_common.sh@819 -- # '[' -z 2966785 ']' 00:27:51.871 23:24:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:51.871 23:24:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:51.871 23:24:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:51.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:51.871 23:24:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:51.871 23:24:14 -- common/autotest_common.sh@10 -- # set +x 00:27:51.871 [2024-06-07 23:24:14.433583] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
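For reference, the nvmf_tcp_init plumbing traced above boils down to a short iproute2 sequence; a minimal sketch of the same setup, using the interface names and addresses from this run (cvl_0_0 moved into the target namespace, cvl_0_1 left on the host), would be roughly:

    # the "target" port lives in its own namespace so initiator->target traffic
    # crosses the physical link instead of staying on the host stack
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP in
    ping -c 1 10.0.0.2                                                   # reachability check

The nvmf target is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE), so it listens on 10.0.0.2 while the initiator side connects from 10.0.0.1.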
00:27:51.871 [2024-06-07 23:24:14.433650] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:51.871 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.871 [2024-06-07 23:24:14.522444] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:52.132 [2024-06-07 23:24:14.567828] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:52.132 [2024-06-07 23:24:14.567983] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:52.132 [2024-06-07 23:24:14.567994] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:52.132 [2024-06-07 23:24:14.568003] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:52.132 [2024-06-07 23:24:14.568136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:52.132 [2024-06-07 23:24:14.568303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:52.132 [2024-06-07 23:24:14.568320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.704 23:24:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:52.704 23:24:15 -- common/autotest_common.sh@852 -- # return 0 00:27:52.704 23:24:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:52.704 23:24:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:52.704 23:24:15 -- common/autotest_common.sh@10 -- # set +x 00:27:52.704 23:24:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:52.704 23:24:15 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:52.704 23:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:52.704 23:24:15 -- common/autotest_common.sh@10 -- # set +x 00:27:52.704 [2024-06-07 23:24:15.255780] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:52.704 23:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:52.704 23:24:15 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:52.704 23:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:52.704 23:24:15 -- common/autotest_common.sh@10 -- # set +x 00:27:52.704 Malloc0 00:27:52.704 23:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:52.704 23:24:15 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:52.704 23:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:52.704 23:24:15 -- common/autotest_common.sh@10 -- # set +x 00:27:52.704 23:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:52.704 23:24:15 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:52.704 23:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:52.704 23:24:15 -- common/autotest_common.sh@10 -- # set +x 00:27:52.704 23:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:52.704 23:24:15 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:52.704 23:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:52.704 23:24:15 -- common/autotest_common.sh@10 -- # set +x 00:27:52.704 [2024-06-07 
23:24:15.320590] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:52.704 23:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:52.704 23:24:15 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:52.704 23:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:52.704 23:24:15 -- common/autotest_common.sh@10 -- # set +x 00:27:52.704 [2024-06-07 23:24:15.332554] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:52.704 23:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:52.704 23:24:15 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:52.704 23:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:52.704 23:24:15 -- common/autotest_common.sh@10 -- # set +x 00:27:52.704 Malloc1 00:27:52.704 23:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:52.704 23:24:15 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:52.704 23:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:52.704 23:24:15 -- common/autotest_common.sh@10 -- # set +x 00:27:52.704 23:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:52.704 23:24:15 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:52.704 23:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:52.704 23:24:15 -- common/autotest_common.sh@10 -- # set +x 00:27:52.704 23:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:52.704 23:24:15 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:52.704 23:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:52.704 23:24:15 -- common/autotest_common.sh@10 -- # set +x 00:27:52.966 23:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:52.966 23:24:15 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:52.966 23:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:52.966 23:24:15 -- common/autotest_common.sh@10 -- # set +x 00:27:52.966 23:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:52.966 23:24:15 -- host/multicontroller.sh@44 -- # bdevperf_pid=2966964 00:27:52.966 23:24:15 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:52.966 23:24:15 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:52.966 23:24:15 -- host/multicontroller.sh@47 -- # waitforlisten 2966964 /var/tmp/bdevperf.sock 00:27:52.966 23:24:15 -- common/autotest_common.sh@819 -- # '[' -z 2966964 ']' 00:27:52.966 23:24:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:52.966 23:24:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:52.966 23:24:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:52.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
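For reference, the target-side configuration performed by the rpc_cmd calls above (rpc_cmd ultimately drives SPDK's scripts/rpc.py) could be reproduced directly; a rough equivalent with the NQNs and addresses from this run is:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # nqn.2016-06.io.spdk:cnode2 is built the same way on top of Malloc1

bdevperf itself is started with -z -r /var/tmp/bdevperf.sock, i.e. it idles until controllers are attached over that RPC socket in the steps that follow.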
00:27:52.966 23:24:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:52.966 23:24:15 -- common/autotest_common.sh@10 -- # set +x 00:27:53.909 23:24:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:53.909 23:24:16 -- common/autotest_common.sh@852 -- # return 0 00:27:53.909 23:24:16 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:53.909 23:24:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:53.909 23:24:16 -- common/autotest_common.sh@10 -- # set +x 00:27:53.909 NVMe0n1 00:27:53.909 23:24:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:53.909 23:24:16 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:53.909 23:24:16 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:53.909 23:24:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:53.909 23:24:16 -- common/autotest_common.sh@10 -- # set +x 00:27:53.909 23:24:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:53.909 1 00:27:53.909 23:24:16 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:53.909 23:24:16 -- common/autotest_common.sh@640 -- # local es=0 00:27:53.909 23:24:16 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:53.909 23:24:16 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:27:53.909 23:24:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:53.909 23:24:16 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:27:53.909 23:24:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:53.909 23:24:16 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:53.909 23:24:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:53.909 23:24:16 -- common/autotest_common.sh@10 -- # set +x 00:27:53.909 request: 00:27:53.909 { 00:27:53.909 "name": "NVMe0", 00:27:53.909 "trtype": "tcp", 00:27:53.909 "traddr": "10.0.0.2", 00:27:53.909 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:53.909 "hostaddr": "10.0.0.2", 00:27:53.909 "hostsvcid": "60000", 00:27:53.909 "adrfam": "ipv4", 00:27:53.909 "trsvcid": "4420", 00:27:53.909 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:53.909 "method": "bdev_nvme_attach_controller", 00:27:53.909 "req_id": 1 00:27:53.909 } 00:27:53.909 Got JSON-RPC error response 00:27:53.909 response: 00:27:53.909 { 00:27:53.909 "code": -114, 00:27:53.909 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:53.909 } 00:27:53.909 23:24:16 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:27:53.909 23:24:16 -- common/autotest_common.sh@643 -- # es=1 00:27:53.909 23:24:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:53.909 23:24:16 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:53.909 23:24:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:53.909 23:24:16 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:53.909 23:24:16 -- common/autotest_common.sh@640 -- # local es=0 00:27:53.909 23:24:16 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:53.909 23:24:16 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:27:53.909 23:24:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:53.909 23:24:16 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:27:53.909 23:24:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:53.909 23:24:16 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:53.909 23:24:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:53.909 23:24:16 -- common/autotest_common.sh@10 -- # set +x 00:27:53.909 request: 00:27:53.909 { 00:27:53.909 "name": "NVMe0", 00:27:53.909 "trtype": "tcp", 00:27:53.909 "traddr": "10.0.0.2", 00:27:53.909 "hostaddr": "10.0.0.2", 00:27:53.909 "hostsvcid": "60000", 00:27:53.909 "adrfam": "ipv4", 00:27:53.909 "trsvcid": "4420", 00:27:53.909 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:53.909 "method": "bdev_nvme_attach_controller", 00:27:53.909 "req_id": 1 00:27:53.909 } 00:27:53.909 Got JSON-RPC error response 00:27:53.909 response: 00:27:53.909 { 00:27:53.909 "code": -114, 00:27:53.909 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:53.909 } 00:27:53.909 23:24:16 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:27:53.909 23:24:16 -- common/autotest_common.sh@643 -- # es=1 00:27:53.909 23:24:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:53.909 23:24:16 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:53.909 23:24:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:53.909 23:24:16 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:53.909 23:24:16 -- common/autotest_common.sh@640 -- # local es=0 00:27:53.909 23:24:16 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:53.909 23:24:16 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:27:53.909 23:24:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:53.909 23:24:16 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:27:53.909 23:24:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:53.909 23:24:16 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:53.909 23:24:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:53.909 23:24:16 -- common/autotest_common.sh@10 -- # set +x 00:27:53.909 request: 00:27:53.909 { 00:27:53.909 "name": "NVMe0", 00:27:53.909 "trtype": "tcp", 00:27:53.909 "traddr": "10.0.0.2", 00:27:53.909 "hostaddr": 
"10.0.0.2", 00:27:53.909 "hostsvcid": "60000", 00:27:53.909 "adrfam": "ipv4", 00:27:53.909 "trsvcid": "4420", 00:27:53.909 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:53.909 "multipath": "disable", 00:27:53.909 "method": "bdev_nvme_attach_controller", 00:27:53.909 "req_id": 1 00:27:53.909 } 00:27:53.909 Got JSON-RPC error response 00:27:53.909 response: 00:27:53.909 { 00:27:53.909 "code": -114, 00:27:53.909 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:27:53.909 } 00:27:53.909 23:24:16 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:27:53.909 23:24:16 -- common/autotest_common.sh@643 -- # es=1 00:27:53.909 23:24:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:53.909 23:24:16 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:53.909 23:24:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:53.909 23:24:16 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:53.909 23:24:16 -- common/autotest_common.sh@640 -- # local es=0 00:27:53.909 23:24:16 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:53.909 23:24:16 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:27:53.909 23:24:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:53.909 23:24:16 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:27:53.909 23:24:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:53.909 23:24:16 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:53.910 23:24:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:53.910 23:24:16 -- common/autotest_common.sh@10 -- # set +x 00:27:53.910 request: 00:27:53.910 { 00:27:53.910 "name": "NVMe0", 00:27:53.910 "trtype": "tcp", 00:27:53.910 "traddr": "10.0.0.2", 00:27:53.910 "hostaddr": "10.0.0.2", 00:27:53.910 "hostsvcid": "60000", 00:27:53.910 "adrfam": "ipv4", 00:27:53.910 "trsvcid": "4420", 00:27:53.910 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:53.910 "multipath": "failover", 00:27:53.910 "method": "bdev_nvme_attach_controller", 00:27:53.910 "req_id": 1 00:27:53.910 } 00:27:53.910 Got JSON-RPC error response 00:27:53.910 response: 00:27:53.910 { 00:27:53.910 "code": -114, 00:27:53.910 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:53.910 } 00:27:53.910 23:24:16 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:27:53.910 23:24:16 -- common/autotest_common.sh@643 -- # es=1 00:27:53.910 23:24:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:53.910 23:24:16 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:53.910 23:24:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:53.910 23:24:16 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:53.910 23:24:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:53.910 23:24:16 -- common/autotest_common.sh@10 -- # set +x 00:27:54.170 00:27:54.170 23:24:16 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:27:54.170 23:24:16 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:54.170 23:24:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:54.170 23:24:16 -- common/autotest_common.sh@10 -- # set +x 00:27:54.170 23:24:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:54.170 23:24:16 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:54.170 23:24:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:54.170 23:24:16 -- common/autotest_common.sh@10 -- # set +x 00:27:54.431 00:27:54.431 23:24:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:54.431 23:24:16 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:54.431 23:24:16 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:54.431 23:24:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:54.431 23:24:17 -- common/autotest_common.sh@10 -- # set +x 00:27:54.431 23:24:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:54.431 23:24:17 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:54.431 23:24:17 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:55.817 0 00:27:55.817 23:24:18 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:55.817 23:24:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:55.817 23:24:18 -- common/autotest_common.sh@10 -- # set +x 00:27:55.817 23:24:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:55.817 23:24:18 -- host/multicontroller.sh@100 -- # killprocess 2966964 00:27:55.817 23:24:18 -- common/autotest_common.sh@926 -- # '[' -z 2966964 ']' 00:27:55.817 23:24:18 -- common/autotest_common.sh@930 -- # kill -0 2966964 00:27:55.817 23:24:18 -- common/autotest_common.sh@931 -- # uname 00:27:55.817 23:24:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:55.817 23:24:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2966964 00:27:55.817 23:24:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:55.817 23:24:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:55.817 23:24:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2966964' 00:27:55.817 killing process with pid 2966964 00:27:55.817 23:24:18 -- common/autotest_common.sh@945 -- # kill 2966964 00:27:55.817 23:24:18 -- common/autotest_common.sh@950 -- # wait 2966964 00:27:55.817 23:24:18 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:55.817 23:24:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:55.817 23:24:18 -- common/autotest_common.sh@10 -- # set +x 00:27:55.817 23:24:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:55.817 23:24:18 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:55.817 23:24:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:55.817 23:24:18 -- common/autotest_common.sh@10 -- # set +x 00:27:55.817 23:24:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:55.817 23:24:18 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
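The string of -114 responses above is the point of this test: once NVMe0 has been attached through the bdevperf RPC socket, a second bdev_nvme_attach_controller under the same bdev name is only accepted for a genuinely new path to the same subsystem. Condensed, what was just exercised (same socket, addresses and NQNs as in this run):

    # first attach succeeds and exposes bdev NVMe0n1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

    # all of the following are rejected with -114:
    #   same path but a different hostnqn (-q nqn.2021-09-7.io.spdk:00001)
    #   same path but a different subsystem (-n ...cnode2)
    #   same path with -x disable  ("already exists and multipath is disabled")
    #   the already-attached path again with -x failover

    # a new path to the same subsystem is accepted (the 4421 listener), later detached
    # again, and a second controller NVMe1 is attached there for the I/O phase:
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1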
00:27:55.817 23:24:18 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:55.817 23:24:18 -- common/autotest_common.sh@1597 -- # read -r file 00:27:55.817 23:24:18 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:27:55.817 23:24:18 -- common/autotest_common.sh@1596 -- # sort -u 00:27:55.817 23:24:18 -- common/autotest_common.sh@1598 -- # cat 00:27:55.817 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:55.817 [2024-06-07 23:24:15.450950] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:55.817 [2024-06-07 23:24:15.451007] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2966964 ] 00:27:55.817 EAL: No free 2048 kB hugepages reported on node 1 00:27:55.817 [2024-06-07 23:24:15.510788] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.817 [2024-06-07 23:24:15.539672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.817 [2024-06-07 23:24:16.993228] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 4ed84c4b-10a4-4761-b4d8-af5168904da3 already exists 00:27:55.817 [2024-06-07 23:24:16.993262] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:4ed84c4b-10a4-4761-b4d8-af5168904da3 alias for bdev NVMe1n1 00:27:55.817 [2024-06-07 23:24:16.993273] bdev_nvme.c:4230:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:55.817 Running I/O for 1 seconds... 00:27:55.817 00:27:55.817 Latency(us) 00:27:55.817 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.817 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:27:55.817 NVMe0n1 : 1.00 27012.24 105.52 0.00 0.00 4727.92 3741.01 13489.49 00:27:55.817 =================================================================================================================== 00:27:55.817 Total : 27012.24 105.52 0.00 0.00 4727.92 3741.01 13489.49 00:27:55.817 Received shutdown signal, test time was about 1.000000 seconds 00:27:55.817 00:27:55.817 Latency(us) 00:27:55.817 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.817 =================================================================================================================== 00:27:55.817 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:55.817 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:55.817 23:24:18 -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:55.817 23:24:18 -- common/autotest_common.sh@1597 -- # read -r file 00:27:55.817 23:24:18 -- host/multicontroller.sh@108 -- # nvmftestfini 00:27:55.817 23:24:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:55.817 23:24:18 -- nvmf/common.sh@116 -- # sync 00:27:55.817 23:24:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:55.817 23:24:18 -- nvmf/common.sh@119 -- # set +e 00:27:55.817 23:24:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:55.817 23:24:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:55.817 rmmod nvme_tcp 00:27:55.817 rmmod nvme_fabrics 00:27:55.817 rmmod nvme_keyring 00:27:55.818 23:24:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:55.818 23:24:18 -- nvmf/common.sh@123 -- # 
set -e 00:27:55.818 23:24:18 -- nvmf/common.sh@124 -- # return 0 00:27:55.818 23:24:18 -- nvmf/common.sh@477 -- # '[' -n 2966785 ']' 00:27:55.818 23:24:18 -- nvmf/common.sh@478 -- # killprocess 2966785 00:27:55.818 23:24:18 -- common/autotest_common.sh@926 -- # '[' -z 2966785 ']' 00:27:55.818 23:24:18 -- common/autotest_common.sh@930 -- # kill -0 2966785 00:27:55.818 23:24:18 -- common/autotest_common.sh@931 -- # uname 00:27:55.818 23:24:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:55.818 23:24:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2966785 00:27:55.818 23:24:18 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:55.818 23:24:18 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:55.818 23:24:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2966785' 00:27:55.818 killing process with pid 2966785 00:27:55.818 23:24:18 -- common/autotest_common.sh@945 -- # kill 2966785 00:27:55.818 23:24:18 -- common/autotest_common.sh@950 -- # wait 2966785 00:27:56.079 23:24:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:56.079 23:24:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:56.079 23:24:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:56.079 23:24:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:56.079 23:24:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:56.079 23:24:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.079 23:24:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:56.079 23:24:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.626 23:24:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:58.626 00:27:58.626 real 0m13.701s 00:27:58.626 user 0m17.405s 00:27:58.626 sys 0m6.062s 00:27:58.626 23:24:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:58.626 23:24:20 -- common/autotest_common.sh@10 -- # set +x 00:27:58.626 ************************************ 00:27:58.626 END TEST nvmf_multicontroller 00:27:58.626 ************************************ 00:27:58.626 23:24:20 -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:58.626 23:24:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:58.626 23:24:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:58.626 23:24:20 -- common/autotest_common.sh@10 -- # set +x 00:27:58.626 ************************************ 00:27:58.626 START TEST nvmf_aer 00:27:58.626 ************************************ 00:27:58.626 23:24:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:58.626 * Looking for test storage... 
00:27:58.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:58.626 23:24:20 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:58.626 23:24:20 -- nvmf/common.sh@7 -- # uname -s 00:27:58.626 23:24:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:58.626 23:24:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:58.626 23:24:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:58.626 23:24:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:58.626 23:24:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:58.626 23:24:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:58.626 23:24:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:58.626 23:24:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:58.626 23:24:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:58.626 23:24:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:58.626 23:24:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:58.626 23:24:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:58.626 23:24:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:58.626 23:24:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:58.626 23:24:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:58.626 23:24:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:58.626 23:24:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:58.626 23:24:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:58.626 23:24:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:58.626 23:24:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.626 23:24:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.626 23:24:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.626 23:24:20 -- paths/export.sh@5 -- # export PATH 00:27:58.626 23:24:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.626 23:24:20 -- nvmf/common.sh@46 -- # : 0 00:27:58.626 23:24:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:58.626 23:24:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:58.626 23:24:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:58.626 23:24:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:58.626 23:24:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:58.626 23:24:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:58.626 23:24:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:58.626 23:24:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:58.626 23:24:20 -- host/aer.sh@11 -- # nvmftestinit 00:27:58.626 23:24:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:58.626 23:24:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:58.626 23:24:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:58.626 23:24:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:58.626 23:24:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:58.626 23:24:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.626 23:24:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:58.626 23:24:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.626 23:24:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:58.626 23:24:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:58.626 23:24:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:58.626 23:24:20 -- common/autotest_common.sh@10 -- # set +x 00:28:05.214 23:24:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:05.214 23:24:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:05.214 23:24:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:05.214 23:24:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:05.214 23:24:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:05.214 23:24:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:05.214 23:24:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:05.214 23:24:27 -- nvmf/common.sh@294 -- # net_devs=() 00:28:05.214 23:24:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:05.214 23:24:27 -- nvmf/common.sh@295 -- # e810=() 00:28:05.214 23:24:27 -- nvmf/common.sh@295 -- # local -ga e810 00:28:05.214 23:24:27 -- nvmf/common.sh@296 -- # x722=() 00:28:05.214 
23:24:27 -- nvmf/common.sh@296 -- # local -ga x722 00:28:05.214 23:24:27 -- nvmf/common.sh@297 -- # mlx=() 00:28:05.214 23:24:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:05.214 23:24:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:05.214 23:24:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:05.214 23:24:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:05.214 23:24:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:05.214 23:24:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:05.214 23:24:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:05.214 23:24:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:05.214 23:24:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:05.214 23:24:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:05.214 23:24:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:05.214 23:24:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:05.214 23:24:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:05.214 23:24:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:05.214 23:24:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:05.214 23:24:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:05.214 23:24:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:05.214 23:24:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:05.214 23:24:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:05.214 23:24:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:05.214 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:05.214 23:24:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:05.214 23:24:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:05.214 23:24:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:05.214 23:24:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:05.214 23:24:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:05.214 23:24:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:05.214 23:24:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:05.214 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:05.214 23:24:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:05.214 23:24:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:05.214 23:24:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:05.214 23:24:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:05.214 23:24:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:05.214 23:24:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:05.214 23:24:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:05.214 23:24:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:05.214 23:24:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:05.214 23:24:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:05.214 23:24:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:05.214 23:24:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:05.215 23:24:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:05.215 Found net devices under 0000:31:00.0: cvl_0_0 00:28:05.215 23:24:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:05.215 23:24:27 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:05.215 23:24:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:05.215 23:24:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:05.215 23:24:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:05.215 23:24:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:05.215 Found net devices under 0000:31:00.1: cvl_0_1 00:28:05.215 23:24:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:05.215 23:24:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:05.215 23:24:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:05.215 23:24:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:05.215 23:24:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:05.215 23:24:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:05.215 23:24:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:05.215 23:24:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:05.215 23:24:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:05.215 23:24:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:05.215 23:24:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:05.215 23:24:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:05.215 23:24:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:05.215 23:24:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:05.215 23:24:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:05.215 23:24:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:05.215 23:24:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:05.215 23:24:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:05.215 23:24:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:05.215 23:24:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:05.215 23:24:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:05.215 23:24:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:05.215 23:24:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:05.215 23:24:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:05.215 23:24:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:05.215 23:24:27 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:05.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:05.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:28:05.215 00:28:05.215 --- 10.0.0.2 ping statistics --- 00:28:05.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:05.215 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:28:05.215 23:24:27 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:05.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:05.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:28:05.215 00:28:05.215 --- 10.0.0.1 ping statistics --- 00:28:05.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:05.215 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:28:05.215 23:24:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:05.215 23:24:27 -- nvmf/common.sh@410 -- # return 0 00:28:05.215 23:24:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:05.215 23:24:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:05.215 23:24:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:05.215 23:24:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:05.215 23:24:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:05.215 23:24:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:05.215 23:24:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:05.215 23:24:27 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:05.215 23:24:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:05.215 23:24:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:05.215 23:24:27 -- common/autotest_common.sh@10 -- # set +x 00:28:05.215 23:24:27 -- nvmf/common.sh@469 -- # nvmfpid=2971727 00:28:05.215 23:24:27 -- nvmf/common.sh@470 -- # waitforlisten 2971727 00:28:05.215 23:24:27 -- common/autotest_common.sh@819 -- # '[' -z 2971727 ']' 00:28:05.215 23:24:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:05.215 23:24:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:05.215 23:24:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:05.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:05.215 23:24:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:05.215 23:24:27 -- common/autotest_common.sh@10 -- # set +x 00:28:05.215 23:24:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:05.215 [2024-06-07 23:24:27.870127] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:28:05.215 [2024-06-07 23:24:27.870186] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:05.476 EAL: No free 2048 kB hugepages reported on node 1 00:28:05.476 [2024-06-07 23:24:27.941618] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:05.476 [2024-06-07 23:24:27.980503] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:05.476 [2024-06-07 23:24:27.980637] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:05.476 [2024-06-07 23:24:27.980646] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:05.476 [2024-06-07 23:24:27.980654] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
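One small reading note on the core masks: the multicontroller target above was started with -m 0xE, while this aer target uses -m 0xF. The mask is a per-core bitmap, so 0xE (binary 1110) selects cores 1-3, matching the three reactor threads of the earlier run, and 0xF (binary 1111) adds core 0, matching the "Total cores available: 4" just reported and the four reactors that start next.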
00:28:05.476 [2024-06-07 23:24:27.980794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:05.476 [2024-06-07 23:24:27.980913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:05.476 [2024-06-07 23:24:27.981136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:05.476 [2024-06-07 23:24:27.981136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:06.049 23:24:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:06.049 23:24:28 -- common/autotest_common.sh@852 -- # return 0 00:28:06.049 23:24:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:06.049 23:24:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:06.049 23:24:28 -- common/autotest_common.sh@10 -- # set +x 00:28:06.049 23:24:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:06.049 23:24:28 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:06.049 23:24:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:06.049 23:24:28 -- common/autotest_common.sh@10 -- # set +x 00:28:06.049 [2024-06-07 23:24:28.685488] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:06.049 23:24:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:06.049 23:24:28 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:06.049 23:24:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:06.049 23:24:28 -- common/autotest_common.sh@10 -- # set +x 00:28:06.049 Malloc0 00:28:06.049 23:24:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:06.050 23:24:28 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:06.050 23:24:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:06.050 23:24:28 -- common/autotest_common.sh@10 -- # set +x 00:28:06.050 23:24:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:06.050 23:24:28 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:06.050 23:24:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:06.050 23:24:28 -- common/autotest_common.sh@10 -- # set +x 00:28:06.311 23:24:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:06.311 23:24:28 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:06.311 23:24:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:06.311 23:24:28 -- common/autotest_common.sh@10 -- # set +x 00:28:06.311 [2024-06-07 23:24:28.744836] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:06.311 23:24:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:06.311 23:24:28 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:06.311 23:24:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:06.311 23:24:28 -- common/autotest_common.sh@10 -- # set +x 00:28:06.311 [2024-06-07 23:24:28.756647] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:28:06.311 [ 00:28:06.311 { 00:28:06.311 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:06.311 "subtype": "Discovery", 00:28:06.311 "listen_addresses": [], 00:28:06.311 "allow_any_host": true, 00:28:06.311 "hosts": [] 00:28:06.311 }, 00:28:06.311 { 00:28:06.311 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:28:06.311 "subtype": "NVMe", 00:28:06.311 "listen_addresses": [ 00:28:06.311 { 00:28:06.311 "transport": "TCP", 00:28:06.311 "trtype": "TCP", 00:28:06.311 "adrfam": "IPv4", 00:28:06.311 "traddr": "10.0.0.2", 00:28:06.311 "trsvcid": "4420" 00:28:06.311 } 00:28:06.311 ], 00:28:06.311 "allow_any_host": true, 00:28:06.311 "hosts": [], 00:28:06.311 "serial_number": "SPDK00000000000001", 00:28:06.311 "model_number": "SPDK bdev Controller", 00:28:06.311 "max_namespaces": 2, 00:28:06.311 "min_cntlid": 1, 00:28:06.311 "max_cntlid": 65519, 00:28:06.311 "namespaces": [ 00:28:06.311 { 00:28:06.311 "nsid": 1, 00:28:06.311 "bdev_name": "Malloc0", 00:28:06.311 "name": "Malloc0", 00:28:06.311 "nguid": "E8BB5993EDFD4A1CB8F9F677D44F603F", 00:28:06.311 "uuid": "e8bb5993-edfd-4a1c-b8f9-f677d44f603f" 00:28:06.311 } 00:28:06.311 ] 00:28:06.311 } 00:28:06.311 ] 00:28:06.311 23:24:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:06.311 23:24:28 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:06.311 23:24:28 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:06.311 23:24:28 -- host/aer.sh@33 -- # aerpid=2971975 00:28:06.311 23:24:28 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:06.311 23:24:28 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:06.311 23:24:28 -- common/autotest_common.sh@1244 -- # local i=0 00:28:06.311 23:24:28 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:06.311 23:24:28 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:28:06.311 23:24:28 -- common/autotest_common.sh@1247 -- # i=1 00:28:06.311 23:24:28 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:28:06.311 EAL: No free 2048 kB hugepages reported on node 1 00:28:06.311 23:24:28 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:06.311 23:24:28 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:28:06.311 23:24:28 -- common/autotest_common.sh@1247 -- # i=2 00:28:06.311 23:24:28 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:28:06.311 23:24:28 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:06.311 23:24:28 -- common/autotest_common.sh@1246 -- # '[' 2 -lt 200 ']' 00:28:06.311 23:24:28 -- common/autotest_common.sh@1247 -- # i=3 00:28:06.311 23:24:28 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:28:06.570 23:24:29 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:06.570 23:24:29 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:06.570 23:24:29 -- common/autotest_common.sh@1255 -- # return 0 00:28:06.570 23:24:29 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:06.570 23:24:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:06.570 23:24:29 -- common/autotest_common.sh@10 -- # set +x 00:28:06.570 Malloc1 00:28:06.571 23:24:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:06.571 23:24:29 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:06.571 23:24:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:06.571 23:24:29 -- common/autotest_common.sh@10 -- # set +x 00:28:06.571 23:24:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:06.571 23:24:29 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:06.571 23:24:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:06.571 23:24:29 -- common/autotest_common.sh@10 -- # set +x 00:28:06.571 Asynchronous Event Request test 00:28:06.571 Attaching to 10.0.0.2 00:28:06.571 Attached to 10.0.0.2 00:28:06.571 Registering asynchronous event callbacks... 00:28:06.571 Starting namespace attribute notice tests for all controllers... 00:28:06.571 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:06.571 aer_cb - Changed Namespace 00:28:06.571 Cleaning up... 00:28:06.571 [ 00:28:06.571 { 00:28:06.571 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:06.571 "subtype": "Discovery", 00:28:06.571 "listen_addresses": [], 00:28:06.571 "allow_any_host": true, 00:28:06.571 "hosts": [] 00:28:06.571 }, 00:28:06.571 { 00:28:06.571 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:06.571 "subtype": "NVMe", 00:28:06.571 "listen_addresses": [ 00:28:06.571 { 00:28:06.571 "transport": "TCP", 00:28:06.571 "trtype": "TCP", 00:28:06.571 "adrfam": "IPv4", 00:28:06.571 "traddr": "10.0.0.2", 00:28:06.571 "trsvcid": "4420" 00:28:06.571 } 00:28:06.571 ], 00:28:06.571 "allow_any_host": true, 00:28:06.571 "hosts": [], 00:28:06.571 "serial_number": "SPDK00000000000001", 00:28:06.571 "model_number": "SPDK bdev Controller", 00:28:06.571 "max_namespaces": 2, 00:28:06.571 "min_cntlid": 1, 00:28:06.571 "max_cntlid": 65519, 00:28:06.571 "namespaces": [ 00:28:06.571 { 00:28:06.571 "nsid": 1, 00:28:06.571 "bdev_name": "Malloc0", 00:28:06.571 "name": "Malloc0", 00:28:06.571 "nguid": "E8BB5993EDFD4A1CB8F9F677D44F603F", 00:28:06.571 "uuid": "e8bb5993-edfd-4a1c-b8f9-f677d44f603f" 00:28:06.571 }, 00:28:06.571 { 00:28:06.571 "nsid": 2, 00:28:06.571 "bdev_name": "Malloc1", 00:28:06.571 "name": "Malloc1", 00:28:06.571 "nguid": "4C14B59472974AB1810D161DD9844E85", 00:28:06.571 "uuid": "4c14b594-7297-4ab1-810d-161dd9844e85" 00:28:06.571 } 00:28:06.571 ] 00:28:06.571 } 00:28:06.571 ] 00:28:06.571 23:24:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:06.571 23:24:29 -- host/aer.sh@43 -- # wait 2971975 00:28:06.571 23:24:29 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:06.571 23:24:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:06.571 23:24:29 -- common/autotest_common.sh@10 -- # set +x 00:28:06.571 23:24:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:06.571 23:24:29 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:06.571 23:24:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:06.571 23:24:29 -- common/autotest_common.sh@10 -- # set +x 00:28:06.571 23:24:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:06.571 23:24:29 -- host/aer.sh@47 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:06.571 23:24:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:06.571 23:24:29 -- common/autotest_common.sh@10 -- # set +x 00:28:06.571 23:24:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:06.571 23:24:29 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:06.571 23:24:29 -- host/aer.sh@51 -- # nvmftestfini 00:28:06.571 23:24:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:06.571 23:24:29 -- nvmf/common.sh@116 -- # sync 00:28:06.571 23:24:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:06.571 23:24:29 -- nvmf/common.sh@119 -- # set +e 00:28:06.571 23:24:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:06.571 23:24:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:06.571 rmmod nvme_tcp 00:28:06.571 rmmod nvme_fabrics 00:28:06.571 rmmod nvme_keyring 00:28:06.831 23:24:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:06.831 23:24:29 -- nvmf/common.sh@123 -- # set -e 00:28:06.831 23:24:29 -- nvmf/common.sh@124 -- # return 0 00:28:06.831 23:24:29 -- nvmf/common.sh@477 -- # '[' -n 2971727 ']' 00:28:06.831 23:24:29 -- nvmf/common.sh@478 -- # killprocess 2971727 00:28:06.831 23:24:29 -- common/autotest_common.sh@926 -- # '[' -z 2971727 ']' 00:28:06.831 23:24:29 -- common/autotest_common.sh@930 -- # kill -0 2971727 00:28:06.831 23:24:29 -- common/autotest_common.sh@931 -- # uname 00:28:06.831 23:24:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:06.831 23:24:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2971727 00:28:06.831 23:24:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:06.831 23:24:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:06.831 23:24:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2971727' 00:28:06.831 killing process with pid 2971727 00:28:06.831 23:24:29 -- common/autotest_common.sh@945 -- # kill 2971727 00:28:06.831 [2024-06-07 23:24:29.317409] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:28:06.831 23:24:29 -- common/autotest_common.sh@950 -- # wait 2971727 00:28:06.831 23:24:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:06.831 23:24:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:06.831 23:24:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:06.831 23:24:29 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:06.831 23:24:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:06.831 23:24:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:06.831 23:24:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:06.831 23:24:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.373 23:24:31 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:09.373 00:28:09.373 real 0m10.777s 00:28:09.373 user 0m7.759s 00:28:09.373 sys 0m5.620s 00:28:09.373 23:24:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:09.373 23:24:31 -- common/autotest_common.sh@10 -- # set +x 00:28:09.373 ************************************ 00:28:09.373 END TEST nvmf_aer 00:28:09.373 ************************************ 00:28:09.373 23:24:31 -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:09.373 23:24:31 -- common/autotest_common.sh@1077 -- # 
'[' 3 -le 1 ']' 00:28:09.373 23:24:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:09.373 23:24:31 -- common/autotest_common.sh@10 -- # set +x 00:28:09.373 ************************************ 00:28:09.373 START TEST nvmf_async_init 00:28:09.373 ************************************ 00:28:09.373 23:24:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:09.373 * Looking for test storage... 00:28:09.373 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:09.373 23:24:31 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:09.373 23:24:31 -- nvmf/common.sh@7 -- # uname -s 00:28:09.373 23:24:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:09.373 23:24:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:09.373 23:24:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:09.373 23:24:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:09.373 23:24:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:09.373 23:24:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:09.373 23:24:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:09.373 23:24:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:09.373 23:24:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:09.373 23:24:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:09.373 23:24:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:09.373 23:24:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:09.373 23:24:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:09.373 23:24:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:09.373 23:24:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:09.373 23:24:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:09.373 23:24:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:09.373 23:24:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:09.373 23:24:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:09.373 23:24:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.373 23:24:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.373 23:24:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.373 23:24:31 -- paths/export.sh@5 -- # export PATH 00:28:09.373 23:24:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.373 23:24:31 -- nvmf/common.sh@46 -- # : 0 00:28:09.373 23:24:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:09.373 23:24:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:09.373 23:24:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:09.373 23:24:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:09.373 23:24:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:09.373 23:24:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:09.373 23:24:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:09.373 23:24:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:09.373 23:24:31 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:09.373 23:24:31 -- host/async_init.sh@14 -- # null_block_size=512 00:28:09.373 23:24:31 -- host/async_init.sh@15 -- # null_bdev=null0 00:28:09.373 23:24:31 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:09.373 23:24:31 -- host/async_init.sh@20 -- # uuidgen 00:28:09.373 23:24:31 -- host/async_init.sh@20 -- # tr -d - 00:28:09.373 23:24:31 -- host/async_init.sh@20 -- # nguid=b9c70d89a8cb45bab629148d771af87b 00:28:09.373 23:24:31 -- host/async_init.sh@22 -- # nvmftestinit 00:28:09.373 23:24:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:09.373 23:24:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:09.373 23:24:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:09.373 23:24:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:09.373 23:24:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:09.373 23:24:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.373 23:24:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:09.373 23:24:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.373 23:24:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:09.373 23:24:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:09.373 23:24:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:09.373 23:24:31 -- common/autotest_common.sh@10 -- # set +x 00:28:17.540 23:24:38 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:17.540 23:24:38 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:17.540 23:24:38 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:17.540 23:24:38 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:17.540 23:24:38 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:17.540 23:24:38 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:17.540 23:24:38 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:17.540 23:24:38 -- nvmf/common.sh@294 -- # net_devs=() 00:28:17.540 23:24:38 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:17.540 23:24:38 -- nvmf/common.sh@295 -- # e810=() 00:28:17.540 23:24:38 -- nvmf/common.sh@295 -- # local -ga e810 00:28:17.540 23:24:38 -- nvmf/common.sh@296 -- # x722=() 00:28:17.540 23:24:38 -- nvmf/common.sh@296 -- # local -ga x722 00:28:17.540 23:24:38 -- nvmf/common.sh@297 -- # mlx=() 00:28:17.540 23:24:38 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:17.540 23:24:38 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:17.540 23:24:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:17.540 23:24:38 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:17.540 23:24:38 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:17.540 23:24:38 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:17.540 23:24:38 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:17.540 23:24:38 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:17.540 23:24:38 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:17.540 23:24:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:17.541 23:24:38 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:17.541 23:24:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:17.541 23:24:38 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:17.541 23:24:38 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:17.541 23:24:38 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:17.541 23:24:38 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:17.541 23:24:38 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:17.541 23:24:38 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:17.541 23:24:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:17.541 23:24:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:17.541 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:17.541 23:24:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:17.541 23:24:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:17.541 23:24:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.541 23:24:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.541 23:24:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:17.541 23:24:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:17.541 23:24:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:17.541 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:17.541 23:24:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:17.541 23:24:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:17.541 23:24:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.541 23:24:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.541 23:24:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:17.541 23:24:38 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:17.541 23:24:38 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:17.541 23:24:38 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:17.541 23:24:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:17.541 
23:24:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.541 23:24:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:17.541 23:24:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.541 23:24:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:17.541 Found net devices under 0000:31:00.0: cvl_0_0 00:28:17.541 23:24:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.541 23:24:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:17.541 23:24:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.541 23:24:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:17.541 23:24:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.541 23:24:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:17.541 Found net devices under 0000:31:00.1: cvl_0_1 00:28:17.541 23:24:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.541 23:24:38 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:17.541 23:24:38 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:17.541 23:24:38 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:17.541 23:24:38 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:17.541 23:24:38 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:17.541 23:24:38 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:17.541 23:24:38 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:17.541 23:24:38 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:17.541 23:24:38 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:17.541 23:24:38 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:17.541 23:24:38 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:17.541 23:24:38 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:17.541 23:24:38 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:17.541 23:24:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:17.541 23:24:38 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:17.541 23:24:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:17.541 23:24:38 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:17.541 23:24:38 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:17.541 23:24:38 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:17.541 23:24:38 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:17.541 23:24:39 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:17.541 23:24:39 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:17.541 23:24:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:17.541 23:24:39 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:17.541 23:24:39 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:17.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:17.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.552 ms 00:28:17.541 00:28:17.541 --- 10.0.0.2 ping statistics --- 00:28:17.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.541 rtt min/avg/max/mdev = 0.552/0.552/0.552/0.000 ms 00:28:17.541 23:24:39 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:17.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:17.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:28:17.541 00:28:17.541 --- 10.0.0.1 ping statistics --- 00:28:17.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.541 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:28:17.541 23:24:39 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:17.541 23:24:39 -- nvmf/common.sh@410 -- # return 0 00:28:17.541 23:24:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:17.541 23:24:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:17.541 23:24:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:17.541 23:24:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:17.541 23:24:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:17.541 23:24:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:17.541 23:24:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:17.541 23:24:39 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:17.541 23:24:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:17.541 23:24:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:17.541 23:24:39 -- common/autotest_common.sh@10 -- # set +x 00:28:17.541 23:24:39 -- nvmf/common.sh@469 -- # nvmfpid=2976185 00:28:17.541 23:24:39 -- nvmf/common.sh@470 -- # waitforlisten 2976185 00:28:17.541 23:24:39 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:17.541 23:24:39 -- common/autotest_common.sh@819 -- # '[' -z 2976185 ']' 00:28:17.541 23:24:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:17.541 23:24:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:17.541 23:24:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:17.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:17.541 23:24:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:17.541 23:24:39 -- common/autotest_common.sh@10 -- # set +x 00:28:17.541 [2024-06-07 23:24:39.243051] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:28:17.541 [2024-06-07 23:24:39.243119] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:17.541 EAL: No free 2048 kB hugepages reported on node 1 00:28:17.541 [2024-06-07 23:24:39.316953] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.541 [2024-06-07 23:24:39.354071] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:17.541 [2024-06-07 23:24:39.354214] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:17.541 [2024-06-07 23:24:39.354224] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:17.541 [2024-06-07 23:24:39.354232] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
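The host tests then drive the freshly started target over SPDK's JSON-RPC socket; rpc_cmd in these traces is effectively scripts/rpc.py talking to the app on its default /var/tmp/spdk.sock. The aer run above provisioned a TCP transport, a 64 MiB malloc bdev, a subsystem capped at two namespaces with one namespace attached, and a listener on 10.0.0.2:4420; the async_init run that follows does the same with a null bdev (null0). A condensed sketch of that provisioning sequence, assuming the default RPC socket path:

  # transport first, then a backing bdev
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  # export it: subsystem (max 2 namespaces), namespace, TCP listener on the namespaced address
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # dump the resulting subsystem configuration (the JSON shown in the trace above)
  scripts/rpc.py nvmf_get_subsystems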
00:28:17.541 [2024-06-07 23:24:39.354268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:17.541 23:24:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:17.541 23:24:40 -- common/autotest_common.sh@852 -- # return 0 00:28:17.541 23:24:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:17.541 23:24:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:17.541 23:24:40 -- common/autotest_common.sh@10 -- # set +x 00:28:17.541 23:24:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:17.541 23:24:40 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:17.541 23:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.541 23:24:40 -- common/autotest_common.sh@10 -- # set +x 00:28:17.541 [2024-06-07 23:24:40.056946] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:17.541 23:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.541 23:24:40 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:17.541 23:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.541 23:24:40 -- common/autotest_common.sh@10 -- # set +x 00:28:17.541 null0 00:28:17.541 23:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.541 23:24:40 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:17.541 23:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.541 23:24:40 -- common/autotest_common.sh@10 -- # set +x 00:28:17.541 23:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.541 23:24:40 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:17.541 23:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.541 23:24:40 -- common/autotest_common.sh@10 -- # set +x 00:28:17.541 23:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.541 23:24:40 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g b9c70d89a8cb45bab629148d771af87b 00:28:17.541 23:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.541 23:24:40 -- common/autotest_common.sh@10 -- # set +x 00:28:17.541 23:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.541 23:24:40 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:17.541 23:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.541 23:24:40 -- common/autotest_common.sh@10 -- # set +x 00:28:17.541 [2024-06-07 23:24:40.117297] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:17.541 23:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.541 23:24:40 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:17.541 23:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.541 23:24:40 -- common/autotest_common.sh@10 -- # set +x 00:28:17.822 nvme0n1 00:28:17.822 23:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.822 23:24:40 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:17.822 23:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.822 23:24:40 -- common/autotest_common.sh@10 -- # set +x 00:28:17.822 [ 00:28:17.822 { 00:28:17.822 "name": "nvme0n1", 00:28:17.822 "aliases": [ 00:28:17.822 
"b9c70d89-a8cb-45ba-b629-148d771af87b" 00:28:17.822 ], 00:28:17.822 "product_name": "NVMe disk", 00:28:17.822 "block_size": 512, 00:28:17.822 "num_blocks": 2097152, 00:28:17.822 "uuid": "b9c70d89-a8cb-45ba-b629-148d771af87b", 00:28:17.822 "assigned_rate_limits": { 00:28:17.822 "rw_ios_per_sec": 0, 00:28:17.822 "rw_mbytes_per_sec": 0, 00:28:17.822 "r_mbytes_per_sec": 0, 00:28:17.822 "w_mbytes_per_sec": 0 00:28:17.822 }, 00:28:17.822 "claimed": false, 00:28:17.822 "zoned": false, 00:28:17.822 "supported_io_types": { 00:28:17.822 "read": true, 00:28:17.822 "write": true, 00:28:17.822 "unmap": false, 00:28:17.822 "write_zeroes": true, 00:28:17.822 "flush": true, 00:28:17.822 "reset": true, 00:28:17.822 "compare": true, 00:28:17.822 "compare_and_write": true, 00:28:17.822 "abort": true, 00:28:17.822 "nvme_admin": true, 00:28:17.822 "nvme_io": true 00:28:17.822 }, 00:28:17.822 "driver_specific": { 00:28:17.822 "nvme": [ 00:28:17.822 { 00:28:17.822 "trid": { 00:28:17.822 "trtype": "TCP", 00:28:17.822 "adrfam": "IPv4", 00:28:17.822 "traddr": "10.0.0.2", 00:28:17.822 "trsvcid": "4420", 00:28:17.822 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:17.822 }, 00:28:17.822 "ctrlr_data": { 00:28:17.822 "cntlid": 1, 00:28:17.822 "vendor_id": "0x8086", 00:28:17.822 "model_number": "SPDK bdev Controller", 00:28:17.822 "serial_number": "00000000000000000000", 00:28:17.822 "firmware_revision": "24.01.1", 00:28:17.822 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:17.822 "oacs": { 00:28:17.822 "security": 0, 00:28:17.822 "format": 0, 00:28:17.822 "firmware": 0, 00:28:17.823 "ns_manage": 0 00:28:17.823 }, 00:28:17.823 "multi_ctrlr": true, 00:28:17.823 "ana_reporting": false 00:28:17.823 }, 00:28:17.823 "vs": { 00:28:17.823 "nvme_version": "1.3" 00:28:17.823 }, 00:28:17.823 "ns_data": { 00:28:17.823 "id": 1, 00:28:17.823 "can_share": true 00:28:17.823 } 00:28:17.823 } 00:28:17.823 ], 00:28:17.823 "mp_policy": "active_passive" 00:28:17.823 } 00:28:17.823 } 00:28:17.823 ] 00:28:17.823 23:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.823 23:24:40 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:17.823 23:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.823 23:24:40 -- common/autotest_common.sh@10 -- # set +x 00:28:17.823 [2024-06-07 23:24:40.387130] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:17.823 [2024-06-07 23:24:40.387189] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd7d6f0 (9): Bad file descriptor 00:28:18.083 [2024-06-07 23:24:40.519338] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:28:18.083 23:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:18.083 23:24:40 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:18.083 23:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:18.083 23:24:40 -- common/autotest_common.sh@10 -- # set +x 00:28:18.083 [ 00:28:18.083 { 00:28:18.083 "name": "nvme0n1", 00:28:18.083 "aliases": [ 00:28:18.083 "b9c70d89-a8cb-45ba-b629-148d771af87b" 00:28:18.083 ], 00:28:18.083 "product_name": "NVMe disk", 00:28:18.083 "block_size": 512, 00:28:18.083 "num_blocks": 2097152, 00:28:18.083 "uuid": "b9c70d89-a8cb-45ba-b629-148d771af87b", 00:28:18.083 "assigned_rate_limits": { 00:28:18.083 "rw_ios_per_sec": 0, 00:28:18.083 "rw_mbytes_per_sec": 0, 00:28:18.083 "r_mbytes_per_sec": 0, 00:28:18.083 "w_mbytes_per_sec": 0 00:28:18.083 }, 00:28:18.083 "claimed": false, 00:28:18.083 "zoned": false, 00:28:18.083 "supported_io_types": { 00:28:18.083 "read": true, 00:28:18.083 "write": true, 00:28:18.083 "unmap": false, 00:28:18.083 "write_zeroes": true, 00:28:18.083 "flush": true, 00:28:18.083 "reset": true, 00:28:18.083 "compare": true, 00:28:18.083 "compare_and_write": true, 00:28:18.083 "abort": true, 00:28:18.083 "nvme_admin": true, 00:28:18.083 "nvme_io": true 00:28:18.083 }, 00:28:18.083 "driver_specific": { 00:28:18.083 "nvme": [ 00:28:18.083 { 00:28:18.083 "trid": { 00:28:18.083 "trtype": "TCP", 00:28:18.083 "adrfam": "IPv4", 00:28:18.083 "traddr": "10.0.0.2", 00:28:18.083 "trsvcid": "4420", 00:28:18.083 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:18.083 }, 00:28:18.083 "ctrlr_data": { 00:28:18.083 "cntlid": 2, 00:28:18.083 "vendor_id": "0x8086", 00:28:18.083 "model_number": "SPDK bdev Controller", 00:28:18.083 "serial_number": "00000000000000000000", 00:28:18.083 "firmware_revision": "24.01.1", 00:28:18.083 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:18.083 "oacs": { 00:28:18.083 "security": 0, 00:28:18.083 "format": 0, 00:28:18.083 "firmware": 0, 00:28:18.083 "ns_manage": 0 00:28:18.083 }, 00:28:18.083 "multi_ctrlr": true, 00:28:18.083 "ana_reporting": false 00:28:18.083 }, 00:28:18.083 "vs": { 00:28:18.083 "nvme_version": "1.3" 00:28:18.083 }, 00:28:18.083 "ns_data": { 00:28:18.083 "id": 1, 00:28:18.083 "can_share": true 00:28:18.083 } 00:28:18.083 } 00:28:18.083 ], 00:28:18.083 "mp_policy": "active_passive" 00:28:18.083 } 00:28:18.083 } 00:28:18.083 ] 00:28:18.083 23:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:18.083 23:24:40 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.083 23:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:18.083 23:24:40 -- common/autotest_common.sh@10 -- # set +x 00:28:18.083 23:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:18.083 23:24:40 -- host/async_init.sh@53 -- # mktemp 00:28:18.083 23:24:40 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.lxpAn038CD 00:28:18.083 23:24:40 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:18.083 23:24:40 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.lxpAn038CD 00:28:18.083 23:24:40 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:18.083 23:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:18.083 23:24:40 -- common/autotest_common.sh@10 -- # set +x 00:28:18.083 23:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:18.083 23:24:40 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:18.083 23:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:18.083 23:24:40 -- common/autotest_common.sh@10 -- # set +x 00:28:18.083 [2024-06-07 23:24:40.583747] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:18.083 [2024-06-07 23:24:40.583869] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:18.083 23:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:18.083 23:24:40 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lxpAn038CD 00:28:18.083 23:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:18.083 23:24:40 -- common/autotest_common.sh@10 -- # set +x 00:28:18.083 23:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:18.083 23:24:40 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lxpAn038CD 00:28:18.083 23:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:18.083 23:24:40 -- common/autotest_common.sh@10 -- # set +x 00:28:18.083 [2024-06-07 23:24:40.607804] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:18.083 nvme0n1 00:28:18.083 23:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:18.083 23:24:40 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:18.083 23:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:18.083 23:24:40 -- common/autotest_common.sh@10 -- # set +x 00:28:18.083 [ 00:28:18.083 { 00:28:18.083 "name": "nvme0n1", 00:28:18.083 "aliases": [ 00:28:18.083 "b9c70d89-a8cb-45ba-b629-148d771af87b" 00:28:18.083 ], 00:28:18.083 "product_name": "NVMe disk", 00:28:18.083 "block_size": 512, 00:28:18.083 "num_blocks": 2097152, 00:28:18.083 "uuid": "b9c70d89-a8cb-45ba-b629-148d771af87b", 00:28:18.083 "assigned_rate_limits": { 00:28:18.083 "rw_ios_per_sec": 0, 00:28:18.083 "rw_mbytes_per_sec": 0, 00:28:18.083 "r_mbytes_per_sec": 0, 00:28:18.083 "w_mbytes_per_sec": 0 00:28:18.083 }, 00:28:18.083 "claimed": false, 00:28:18.083 "zoned": false, 00:28:18.083 "supported_io_types": { 00:28:18.083 "read": true, 00:28:18.083 "write": true, 00:28:18.083 "unmap": false, 00:28:18.083 "write_zeroes": true, 00:28:18.083 "flush": true, 00:28:18.083 "reset": true, 00:28:18.083 "compare": true, 00:28:18.083 "compare_and_write": true, 00:28:18.083 "abort": true, 00:28:18.083 "nvme_admin": true, 00:28:18.083 "nvme_io": true 00:28:18.083 }, 00:28:18.083 "driver_specific": { 00:28:18.083 "nvme": [ 00:28:18.083 { 00:28:18.083 "trid": { 00:28:18.083 "trtype": "TCP", 00:28:18.083 "adrfam": "IPv4", 00:28:18.083 "traddr": "10.0.0.2", 00:28:18.083 "trsvcid": "4421", 00:28:18.083 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:18.083 }, 00:28:18.083 "ctrlr_data": { 00:28:18.083 "cntlid": 3, 00:28:18.083 "vendor_id": "0x8086", 00:28:18.083 "model_number": "SPDK bdev Controller", 00:28:18.083 "serial_number": "00000000000000000000", 00:28:18.083 "firmware_revision": "24.01.1", 00:28:18.083 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:18.083 "oacs": { 00:28:18.083 "security": 0, 00:28:18.083 "format": 0, 00:28:18.083 "firmware": 0, 00:28:18.083 "ns_manage": 0 00:28:18.083 }, 00:28:18.083 "multi_ctrlr": true, 00:28:18.083 "ana_reporting": false 00:28:18.083 }, 00:28:18.083 "vs": 
{ 00:28:18.083 "nvme_version": "1.3" 00:28:18.083 }, 00:28:18.083 "ns_data": { 00:28:18.083 "id": 1, 00:28:18.083 "can_share": true 00:28:18.083 } 00:28:18.083 } 00:28:18.083 ], 00:28:18.083 "mp_policy": "active_passive" 00:28:18.083 } 00:28:18.083 } 00:28:18.083 ] 00:28:18.084 23:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:18.084 23:24:40 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.084 23:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:18.084 23:24:40 -- common/autotest_common.sh@10 -- # set +x 00:28:18.084 23:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:18.084 23:24:40 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.lxpAn038CD 00:28:18.084 23:24:40 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:18.084 23:24:40 -- host/async_init.sh@78 -- # nvmftestfini 00:28:18.084 23:24:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:18.084 23:24:40 -- nvmf/common.sh@116 -- # sync 00:28:18.084 23:24:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:18.084 23:24:40 -- nvmf/common.sh@119 -- # set +e 00:28:18.084 23:24:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:18.084 23:24:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:18.084 rmmod nvme_tcp 00:28:18.084 rmmod nvme_fabrics 00:28:18.084 rmmod nvme_keyring 00:28:18.344 23:24:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:18.344 23:24:40 -- nvmf/common.sh@123 -- # set -e 00:28:18.344 23:24:40 -- nvmf/common.sh@124 -- # return 0 00:28:18.344 23:24:40 -- nvmf/common.sh@477 -- # '[' -n 2976185 ']' 00:28:18.344 23:24:40 -- nvmf/common.sh@478 -- # killprocess 2976185 00:28:18.344 23:24:40 -- common/autotest_common.sh@926 -- # '[' -z 2976185 ']' 00:28:18.344 23:24:40 -- common/autotest_common.sh@930 -- # kill -0 2976185 00:28:18.344 23:24:40 -- common/autotest_common.sh@931 -- # uname 00:28:18.344 23:24:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:18.344 23:24:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2976185 00:28:18.344 23:24:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:18.344 23:24:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:18.344 23:24:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2976185' 00:28:18.344 killing process with pid 2976185 00:28:18.344 23:24:40 -- common/autotest_common.sh@945 -- # kill 2976185 00:28:18.344 23:24:40 -- common/autotest_common.sh@950 -- # wait 2976185 00:28:18.344 23:24:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:18.344 23:24:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:18.344 23:24:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:18.344 23:24:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:18.344 23:24:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:18.344 23:24:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:18.344 23:24:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:18.344 23:24:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:20.889 23:24:43 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:20.889 00:28:20.889 real 0m11.479s 00:28:20.889 user 0m4.024s 00:28:20.889 sys 0m5.888s 00:28:20.889 23:24:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:20.889 23:24:43 -- common/autotest_common.sh@10 -- # set +x 00:28:20.889 ************************************ 00:28:20.889 END TEST nvmf_async_init 00:28:20.889 
************************************ 00:28:20.889 23:24:43 -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:20.889 23:24:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:20.889 23:24:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:20.890 23:24:43 -- common/autotest_common.sh@10 -- # set +x 00:28:20.890 ************************************ 00:28:20.890 START TEST dma 00:28:20.890 ************************************ 00:28:20.890 23:24:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:20.890 * Looking for test storage... 00:28:20.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:20.890 23:24:43 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:20.890 23:24:43 -- nvmf/common.sh@7 -- # uname -s 00:28:20.890 23:24:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:20.890 23:24:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:20.890 23:24:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:20.890 23:24:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:20.890 23:24:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:20.890 23:24:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:20.890 23:24:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:20.890 23:24:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:20.890 23:24:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:20.890 23:24:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:20.890 23:24:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:20.890 23:24:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:20.890 23:24:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:20.890 23:24:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:20.890 23:24:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:20.890 23:24:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:20.890 23:24:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:20.890 23:24:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:20.890 23:24:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:20.890 23:24:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.890 23:24:43 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.890 23:24:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.890 23:24:43 -- paths/export.sh@5 -- # export PATH 00:28:20.890 23:24:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.890 23:24:43 -- nvmf/common.sh@46 -- # : 0 00:28:20.890 23:24:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:20.890 23:24:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:20.890 23:24:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:20.890 23:24:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:20.890 23:24:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:20.890 23:24:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:20.890 23:24:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:20.890 23:24:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:20.890 23:24:43 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:20.890 23:24:43 -- host/dma.sh@13 -- # exit 0 00:28:20.890 00:28:20.890 real 0m0.130s 00:28:20.890 user 0m0.060s 00:28:20.890 sys 0m0.079s 00:28:20.890 23:24:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:20.890 23:24:43 -- common/autotest_common.sh@10 -- # set +x 00:28:20.890 ************************************ 00:28:20.890 END TEST dma 00:28:20.890 ************************************ 00:28:20.890 23:24:43 -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:20.890 23:24:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:20.890 23:24:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:20.890 23:24:43 -- common/autotest_common.sh@10 -- # set +x 00:28:20.890 ************************************ 00:28:20.890 START TEST nvmf_identify 00:28:20.890 ************************************ 00:28:20.890 23:24:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:20.890 * Looking for 
test storage... 00:28:20.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:20.890 23:24:43 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:20.890 23:24:43 -- nvmf/common.sh@7 -- # uname -s 00:28:20.890 23:24:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:20.890 23:24:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:20.890 23:24:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:20.890 23:24:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:20.890 23:24:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:20.890 23:24:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:20.890 23:24:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:20.890 23:24:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:20.890 23:24:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:20.890 23:24:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:20.890 23:24:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:20.890 23:24:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:20.890 23:24:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:20.890 23:24:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:20.890 23:24:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:20.890 23:24:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:20.890 23:24:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:20.890 23:24:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:20.890 23:24:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:20.890 23:24:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.890 23:24:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.890 23:24:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.890 23:24:43 -- paths/export.sh@5 -- # export PATH 00:28:20.890 23:24:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.890 23:24:43 -- nvmf/common.sh@46 -- # : 0 00:28:20.890 23:24:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:20.890 23:24:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:20.890 23:24:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:20.890 23:24:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:20.890 23:24:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:20.890 23:24:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:20.890 23:24:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:20.890 23:24:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:20.890 23:24:43 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:20.890 23:24:43 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:20.890 23:24:43 -- host/identify.sh@14 -- # nvmftestinit 00:28:20.890 23:24:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:20.890 23:24:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:20.890 23:24:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:20.890 23:24:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:20.890 23:24:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:20.890 23:24:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.890 23:24:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:20.890 23:24:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:20.890 23:24:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:20.890 23:24:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:20.890 23:24:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:20.890 23:24:43 -- common/autotest_common.sh@10 -- # set +x 00:28:27.474 23:24:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:27.474 23:24:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:27.474 23:24:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:27.474 23:24:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:27.474 23:24:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:27.474 23:24:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:27.474 23:24:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:27.474 23:24:50 -- nvmf/common.sh@294 -- # net_devs=() 00:28:27.474 23:24:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:27.474 23:24:50 -- nvmf/common.sh@295 
-- # e810=() 00:28:27.474 23:24:50 -- nvmf/common.sh@295 -- # local -ga e810 00:28:27.474 23:24:50 -- nvmf/common.sh@296 -- # x722=() 00:28:27.474 23:24:50 -- nvmf/common.sh@296 -- # local -ga x722 00:28:27.474 23:24:50 -- nvmf/common.sh@297 -- # mlx=() 00:28:27.474 23:24:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:27.474 23:24:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:27.474 23:24:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:27.474 23:24:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:27.474 23:24:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:27.474 23:24:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:27.474 23:24:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:27.474 23:24:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:27.474 23:24:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:27.474 23:24:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:27.474 23:24:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:27.474 23:24:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:27.474 23:24:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:27.474 23:24:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:27.474 23:24:50 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:27.474 23:24:50 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:27.474 23:24:50 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:27.474 23:24:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:27.474 23:24:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:27.474 23:24:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:27.474 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:27.474 23:24:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:27.474 23:24:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:27.474 23:24:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:27.474 23:24:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:27.474 23:24:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:27.474 23:24:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:27.474 23:24:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:27.474 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:27.474 23:24:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:27.474 23:24:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:27.474 23:24:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:27.474 23:24:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:27.474 23:24:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:27.474 23:24:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:27.474 23:24:50 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:27.474 23:24:50 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:27.474 23:24:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:27.474 23:24:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:27.474 23:24:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:27.474 23:24:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:27.474 23:24:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:27.474 Found 
net devices under 0000:31:00.0: cvl_0_0 00:28:27.474 23:24:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:27.475 23:24:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:27.475 23:24:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:27.475 23:24:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:27.475 23:24:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:27.475 23:24:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:27.475 Found net devices under 0000:31:00.1: cvl_0_1 00:28:27.475 23:24:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:27.475 23:24:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:27.475 23:24:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:27.475 23:24:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:27.475 23:24:50 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:27.475 23:24:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:27.475 23:24:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:27.475 23:24:50 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:27.475 23:24:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:27.475 23:24:50 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:27.475 23:24:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:27.475 23:24:50 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:27.475 23:24:50 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:27.475 23:24:50 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:27.475 23:24:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:27.475 23:24:50 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:27.475 23:24:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:27.475 23:24:50 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:27.475 23:24:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:27.735 23:24:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:27.735 23:24:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:27.735 23:24:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:27.735 23:24:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:27.995 23:24:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:27.995 23:24:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:27.995 23:24:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:27.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:27.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.780 ms 00:28:27.995 00:28:27.995 --- 10.0.0.2 ping statistics --- 00:28:27.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.995 rtt min/avg/max/mdev = 0.780/0.780/0.780/0.000 ms 00:28:27.995 23:24:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:27.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:27.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:28:27.995 00:28:27.996 --- 10.0.0.1 ping statistics --- 00:28:27.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.996 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:28:27.996 23:24:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:27.996 23:24:50 -- nvmf/common.sh@410 -- # return 0 00:28:27.996 23:24:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:27.996 23:24:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:27.996 23:24:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:27.996 23:24:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:27.996 23:24:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:27.996 23:24:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:27.996 23:24:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:27.996 23:24:50 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:27.996 23:24:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:27.996 23:24:50 -- common/autotest_common.sh@10 -- # set +x 00:28:27.996 23:24:50 -- host/identify.sh@19 -- # nvmfpid=2980906 00:28:27.996 23:24:50 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:27.996 23:24:50 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:27.996 23:24:50 -- host/identify.sh@23 -- # waitforlisten 2980906 00:28:27.996 23:24:50 -- common/autotest_common.sh@819 -- # '[' -z 2980906 ']' 00:28:27.996 23:24:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:27.996 23:24:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:27.996 23:24:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:27.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:27.996 23:24:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:27.996 23:24:50 -- common/autotest_common.sh@10 -- # set +x 00:28:27.996 [2024-06-07 23:24:50.551166] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:28:27.996 [2024-06-07 23:24:50.551232] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:27.996 EAL: No free 2048 kB hugepages reported on node 1 00:28:27.996 [2024-06-07 23:24:50.623178] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:27.996 [2024-06-07 23:24:50.663491] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:27.996 [2024-06-07 23:24:50.663643] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:27.996 [2024-06-07 23:24:50.663655] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:27.996 [2024-06-07 23:24:50.663663] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
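(The nvmf_tcp_init sequence traced above reduces to the following commands; this is a condensed sketch assembled from this run's own trace, with the cvl_0_0/cvl_0_1 interface names and the cvl_0_0_ns_spdk namespace coming from this particular test bed rather than from common.sh in general.)

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP (port 4420) through
    ping -c 1 10.0.0.2                                             # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator reachability
    modprobe nvme-tcp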
00:28:27.996 [2024-06-07 23:24:50.663841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.996 [2024-06-07 23:24:50.663966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:27.996 [2024-06-07 23:24:50.664124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:27.996 [2024-06-07 23:24:50.664126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:28.938 23:24:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:28.938 23:24:51 -- common/autotest_common.sh@852 -- # return 0 00:28:28.938 23:24:51 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:28.938 23:24:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:28.938 23:24:51 -- common/autotest_common.sh@10 -- # set +x 00:28:28.938 [2024-06-07 23:24:51.333424] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:28.938 23:24:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:28.938 23:24:51 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:28.938 23:24:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:28.938 23:24:51 -- common/autotest_common.sh@10 -- # set +x 00:28:28.938 23:24:51 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:28.938 23:24:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:28.938 23:24:51 -- common/autotest_common.sh@10 -- # set +x 00:28:28.938 Malloc0 00:28:28.938 23:24:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:28.938 23:24:51 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:28.938 23:24:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:28.938 23:24:51 -- common/autotest_common.sh@10 -- # set +x 00:28:28.938 23:24:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:28.938 23:24:51 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:28.938 23:24:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:28.938 23:24:51 -- common/autotest_common.sh@10 -- # set +x 00:28:28.938 23:24:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:28.938 23:24:51 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:28.938 23:24:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:28.938 23:24:51 -- common/autotest_common.sh@10 -- # set +x 00:28:28.938 [2024-06-07 23:24:51.432892] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:28.938 23:24:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:28.938 23:24:51 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:28.938 23:24:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:28.938 23:24:51 -- common/autotest_common.sh@10 -- # set +x 00:28:28.938 23:24:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:28.938 23:24:51 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:28.938 23:24:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:28.938 23:24:51 -- common/autotest_common.sh@10 -- # set +x 00:28:28.938 [2024-06-07 23:24:51.456750] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:28:28.938 [ 
00:28:28.938 { 00:28:28.938 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:28.938 "subtype": "Discovery", 00:28:28.938 "listen_addresses": [ 00:28:28.938 { 00:28:28.938 "transport": "TCP", 00:28:28.938 "trtype": "TCP", 00:28:28.938 "adrfam": "IPv4", 00:28:28.938 "traddr": "10.0.0.2", 00:28:28.938 "trsvcid": "4420" 00:28:28.938 } 00:28:28.938 ], 00:28:28.938 "allow_any_host": true, 00:28:28.938 "hosts": [] 00:28:28.938 }, 00:28:28.938 { 00:28:28.938 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:28.938 "subtype": "NVMe", 00:28:28.938 "listen_addresses": [ 00:28:28.938 { 00:28:28.938 "transport": "TCP", 00:28:28.938 "trtype": "TCP", 00:28:28.938 "adrfam": "IPv4", 00:28:28.938 "traddr": "10.0.0.2", 00:28:28.938 "trsvcid": "4420" 00:28:28.938 } 00:28:28.938 ], 00:28:28.938 "allow_any_host": true, 00:28:28.938 "hosts": [], 00:28:28.938 "serial_number": "SPDK00000000000001", 00:28:28.938 "model_number": "SPDK bdev Controller", 00:28:28.938 "max_namespaces": 32, 00:28:28.938 "min_cntlid": 1, 00:28:28.938 "max_cntlid": 65519, 00:28:28.938 "namespaces": [ 00:28:28.938 { 00:28:28.938 "nsid": 1, 00:28:28.939 "bdev_name": "Malloc0", 00:28:28.939 "name": "Malloc0", 00:28:28.939 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:28.939 "eui64": "ABCDEF0123456789", 00:28:28.939 "uuid": "97884c9a-3afc-44cc-8236-9ea70d9b98dd" 00:28:28.939 } 00:28:28.939 ] 00:28:28.939 } 00:28:28.939 ] 00:28:28.939 23:24:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:28.939 23:24:51 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:28.939 [2024-06-07 23:24:51.494313] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
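(For reference, the rpc_cmd calls traced above map onto the following scripts/rpc.py invocations against the default /var/tmp/spdk.sock socket; the argument strings are copied from this run, but identify.sh actually goes through the rpc_cmd wrapper, so treat this as an equivalent sketch rather than the literal commands executed.)

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192          # TCP transport; options copied from the trace
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0             # 64 MB malloc bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_get_subsystems                               # prints the JSON shown above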
00:28:28.939 [2024-06-07 23:24:51.494367] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2981002 ] 00:28:28.939 EAL: No free 2048 kB hugepages reported on node 1 00:28:28.939 [2024-06-07 23:24:51.526868] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:28.939 [2024-06-07 23:24:51.526919] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:28.939 [2024-06-07 23:24:51.526925] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:28.939 [2024-06-07 23:24:51.526936] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:28.939 [2024-06-07 23:24:51.526942] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:28.939 [2024-06-07 23:24:51.530273] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:28.939 [2024-06-07 23:24:51.530306] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xb2cfd0 0 00:28:28.939 [2024-06-07 23:24:51.538249] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:28.939 [2024-06-07 23:24:51.538261] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:28.939 [2024-06-07 23:24:51.538266] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:28.939 [2024-06-07 23:24:51.538273] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:28.939 [2024-06-07 23:24:51.538308] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.939 [2024-06-07 23:24:51.538314] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.939 [2024-06-07 23:24:51.538319] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb2cfd0) 00:28:28.939 [2024-06-07 23:24:51.538332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:28.939 [2024-06-07 23:24:51.538347] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9a180, cid 0, qid 0 00:28:28.939 [2024-06-07 23:24:51.546252] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.939 [2024-06-07 23:24:51.546261] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.939 [2024-06-07 23:24:51.546265] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.939 [2024-06-07 23:24:51.546270] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb9a180) on tqpair=0xb2cfd0 00:28:28.939 [2024-06-07 23:24:51.546281] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:28.939 [2024-06-07 23:24:51.546288] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:28.939 [2024-06-07 23:24:51.546294] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:28.939 [2024-06-07 23:24:51.546308] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.939 [2024-06-07 23:24:51.546312] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:28:28.939 [2024-06-07 23:24:51.546315] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb2cfd0) 00:28:28.939 [2024-06-07 23:24:51.546323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.939 [2024-06-07 23:24:51.546335] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9a180, cid 0, qid 0 00:28:28.939 [2024-06-07 23:24:51.546561] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.939 [2024-06-07 23:24:51.546568] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.939 [2024-06-07 23:24:51.546571] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.939 [2024-06-07 23:24:51.546575] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb9a180) on tqpair=0xb2cfd0 00:28:28.939 [2024-06-07 23:24:51.546582] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:28.939 [2024-06-07 23:24:51.546589] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:28.939 [2024-06-07 23:24:51.546596] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.939 [2024-06-07 23:24:51.546599] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.939 [2024-06-07 23:24:51.546603] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb2cfd0) 00:28:28.939 [2024-06-07 23:24:51.546609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.939 [2024-06-07 23:24:51.546620] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9a180, cid 0, qid 0 00:28:28.939 [2024-06-07 23:24:51.546829] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.939 [2024-06-07 23:24:51.546836] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.939 [2024-06-07 23:24:51.546839] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.939 [2024-06-07 23:24:51.546843] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb9a180) on tqpair=0xb2cfd0 00:28:28.939 [2024-06-07 23:24:51.546848] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:28.939 [2024-06-07 23:24:51.546856] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:28.939 [2024-06-07 23:24:51.546865] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.939 [2024-06-07 23:24:51.546869] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.939 [2024-06-07 23:24:51.546872] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb2cfd0) 00:28:28.939 [2024-06-07 23:24:51.546879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.939 [2024-06-07 23:24:51.546889] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9a180, cid 0, qid 0 00:28:28.939 [2024-06-07 23:24:51.547108] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.939 [2024-06-07 23:24:51.547114] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.939 [2024-06-07 23:24:51.547118] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.939 [2024-06-07 23:24:51.547121] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb9a180) on tqpair=0xb2cfd0 00:28:28.939 [2024-06-07 23:24:51.547126] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:28.939 [2024-06-07 23:24:51.547135] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.939 [2024-06-07 23:24:51.547139] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.939 [2024-06-07 23:24:51.547142] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb2cfd0) 00:28:28.939 [2024-06-07 23:24:51.547149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.939 [2024-06-07 23:24:51.547158] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9a180, cid 0, qid 0 00:28:28.939 [2024-06-07 23:24:51.547366] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.939 [2024-06-07 23:24:51.547373] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.939 [2024-06-07 23:24:51.547376] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.939 [2024-06-07 23:24:51.547380] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb9a180) on tqpair=0xb2cfd0 00:28:28.939 [2024-06-07 23:24:51.547384] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:28.939 [2024-06-07 23:24:51.547389] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:28.939 [2024-06-07 23:24:51.547396] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:28.939 [2024-06-07 23:24:51.547501] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:28.939 [2024-06-07 23:24:51.547506] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:28.939 [2024-06-07 23:24:51.547515] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.939 [2024-06-07 23:24:51.547518] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.939 [2024-06-07 23:24:51.547522] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb2cfd0) 00:28:28.939 [2024-06-07 23:24:51.547529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.939 [2024-06-07 23:24:51.547539] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9a180, cid 0, qid 0 00:28:28.939 [2024-06-07 23:24:51.547745] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.939 [2024-06-07 23:24:51.547751] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.939 [2024-06-07 23:24:51.547754] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.939 
[2024-06-07 23:24:51.547758] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb9a180) on tqpair=0xb2cfd0 00:28:28.939 [2024-06-07 23:24:51.547765] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:28.939 [2024-06-07 23:24:51.547774] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.939 [2024-06-07 23:24:51.547778] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.939 [2024-06-07 23:24:51.547781] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb2cfd0) 00:28:28.939 [2024-06-07 23:24:51.547788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.939 [2024-06-07 23:24:51.547797] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9a180, cid 0, qid 0 00:28:28.939 [2024-06-07 23:24:51.548002] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.939 [2024-06-07 23:24:51.548008] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.939 [2024-06-07 23:24:51.548011] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.939 [2024-06-07 23:24:51.548015] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb9a180) on tqpair=0xb2cfd0 00:28:28.939 [2024-06-07 23:24:51.548019] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:28.940 [2024-06-07 23:24:51.548024] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:28.940 [2024-06-07 23:24:51.548031] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:28.940 [2024-06-07 23:24:51.548038] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:28.940 [2024-06-07 23:24:51.548047] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.940 [2024-06-07 23:24:51.548050] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.940 [2024-06-07 23:24:51.548054] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb2cfd0) 00:28:28.940 [2024-06-07 23:24:51.548060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.940 [2024-06-07 23:24:51.548070] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9a180, cid 0, qid 0 00:28:28.940 [2024-06-07 23:24:51.548301] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:28.940 [2024-06-07 23:24:51.548307] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:28.940 [2024-06-07 23:24:51.548311] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:28.940 [2024-06-07 23:24:51.548315] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb2cfd0): datao=0, datal=4096, cccid=0 00:28:28.940 [2024-06-07 23:24:51.548320] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb9a180) on tqpair(0xb2cfd0): expected_datao=0, payload_size=4096 00:28:28.940 
[2024-06-07 23:24:51.548353] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:28.940 [2024-06-07 23:24:51.548358] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:28.940 [2024-06-07 23:24:51.548510] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.940 [2024-06-07 23:24:51.548517] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.940 [2024-06-07 23:24:51.548520] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.940 [2024-06-07 23:24:51.548524] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb9a180) on tqpair=0xb2cfd0 00:28:28.940 [2024-06-07 23:24:51.548531] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:28.940 [2024-06-07 23:24:51.548539] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:28.940 [2024-06-07 23:24:51.548543] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:28.940 [2024-06-07 23:24:51.548550] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:28.940 [2024-06-07 23:24:51.548554] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:28.940 [2024-06-07 23:24:51.548559] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:28.940 [2024-06-07 23:24:51.548567] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:28.940 [2024-06-07 23:24:51.548573] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.940 [2024-06-07 23:24:51.548577] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.940 [2024-06-07 23:24:51.548581] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb2cfd0) 00:28:28.940 [2024-06-07 23:24:51.548588] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:28.940 [2024-06-07 23:24:51.548598] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9a180, cid 0, qid 0 00:28:28.940 [2024-06-07 23:24:51.548825] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.940 [2024-06-07 23:24:51.548831] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.940 [2024-06-07 23:24:51.548835] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.940 [2024-06-07 23:24:51.548839] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb9a180) on tqpair=0xb2cfd0 00:28:28.940 [2024-06-07 23:24:51.548846] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.940 [2024-06-07 23:24:51.548849] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.940 [2024-06-07 23:24:51.548853] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb2cfd0) 00:28:28.940 [2024-06-07 23:24:51.548859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.940 [2024-06-07 23:24:51.548865] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.940 [2024-06-07 23:24:51.548868] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.940 [2024-06-07 23:24:51.548871] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xb2cfd0) 00:28:28.940 [2024-06-07 23:24:51.548877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.940 [2024-06-07 23:24:51.548883] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.940 [2024-06-07 23:24:51.548886] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.940 [2024-06-07 23:24:51.548890] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xb2cfd0) 00:28:28.940 [2024-06-07 23:24:51.548895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.940 [2024-06-07 23:24:51.548901] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.940 [2024-06-07 23:24:51.548905] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.940 [2024-06-07 23:24:51.548908] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb2cfd0) 00:28:28.940 [2024-06-07 23:24:51.548914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.940 [2024-06-07 23:24:51.548918] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:28.940 [2024-06-07 23:24:51.548928] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:28.940 [2024-06-07 23:24:51.548934] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.940 [2024-06-07 23:24:51.548940] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.940 [2024-06-07 23:24:51.548943] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb2cfd0) 00:28:28.940 [2024-06-07 23:24:51.548950] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.940 [2024-06-07 23:24:51.548961] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9a180, cid 0, qid 0 00:28:28.940 [2024-06-07 23:24:51.548966] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9a2e0, cid 1, qid 0 00:28:28.940 [2024-06-07 23:24:51.548970] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9a440, cid 2, qid 0 00:28:28.940 [2024-06-07 23:24:51.548975] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9a5a0, cid 3, qid 0 00:28:28.940 [2024-06-07 23:24:51.548979] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9a700, cid 4, qid 0 00:28:28.940 [2024-06-07 23:24:51.549206] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.940 [2024-06-07 23:24:51.549213] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.940 [2024-06-07 23:24:51.549216] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.940 [2024-06-07 23:24:51.549220] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb9a700) on 
tqpair=0xb2cfd0 00:28:28.940 [2024-06-07 23:24:51.549225] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:28.940 [2024-06-07 23:24:51.549230] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:28.940 [2024-06-07 23:24:51.549239] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.940 [2024-06-07 23:24:51.549246] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.940 [2024-06-07 23:24:51.549250] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb2cfd0) 00:28:28.940 [2024-06-07 23:24:51.549256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.940 [2024-06-07 23:24:51.549266] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9a700, cid 4, qid 0 00:28:28.940 [2024-06-07 23:24:51.549467] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:28.940 [2024-06-07 23:24:51.549473] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:28.940 [2024-06-07 23:24:51.549477] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:28.940 [2024-06-07 23:24:51.549480] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb2cfd0): datao=0, datal=4096, cccid=4 00:28:28.940 [2024-06-07 23:24:51.549484] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb9a700) on tqpair(0xb2cfd0): expected_datao=0, payload_size=4096 00:28:28.940 [2024-06-07 23:24:51.549512] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:28.940 [2024-06-07 23:24:51.549516] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:28.940 [2024-06-07 23:24:51.593249] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.940 [2024-06-07 23:24:51.593258] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.940 [2024-06-07 23:24:51.593262] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.940 [2024-06-07 23:24:51.593266] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb9a700) on tqpair=0xb2cfd0 00:28:28.940 [2024-06-07 23:24:51.593278] nvme_ctrlr.c:4023:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:28.940 [2024-06-07 23:24:51.593297] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.940 [2024-06-07 23:24:51.593302] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.940 [2024-06-07 23:24:51.593305] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb2cfd0) 00:28:28.940 [2024-06-07 23:24:51.593312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.940 [2024-06-07 23:24:51.593324] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.940 [2024-06-07 23:24:51.593327] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.940 [2024-06-07 23:24:51.593331] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb2cfd0) 00:28:28.940 [2024-06-07 23:24:51.593337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE 
(18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.940 [2024-06-07 23:24:51.593352] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9a700, cid 4, qid 0 00:28:28.940 [2024-06-07 23:24:51.593358] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9a860, cid 5, qid 0 00:28:28.940 [2024-06-07 23:24:51.593609] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:28.940 [2024-06-07 23:24:51.593615] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:28.940 [2024-06-07 23:24:51.593619] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:28.940 [2024-06-07 23:24:51.593622] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb2cfd0): datao=0, datal=1024, cccid=4 00:28:28.941 [2024-06-07 23:24:51.593626] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb9a700) on tqpair(0xb2cfd0): expected_datao=0, payload_size=1024 00:28:28.941 [2024-06-07 23:24:51.593634] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:28.941 [2024-06-07 23:24:51.593637] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:28.941 [2024-06-07 23:24:51.593643] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.941 [2024-06-07 23:24:51.593649] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.941 [2024-06-07 23:24:51.593652] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.941 [2024-06-07 23:24:51.593656] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb9a860) on tqpair=0xb2cfd0 00:28:29.204 [2024-06-07 23:24:51.634449] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.204 [2024-06-07 23:24:51.634458] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.204 [2024-06-07 23:24:51.634462] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.204 [2024-06-07 23:24:51.634466] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb9a700) on tqpair=0xb2cfd0 00:28:29.204 [2024-06-07 23:24:51.634476] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.204 [2024-06-07 23:24:51.634479] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.204 [2024-06-07 23:24:51.634483] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb2cfd0) 00:28:29.204 [2024-06-07 23:24:51.634489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.204 [2024-06-07 23:24:51.634504] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9a700, cid 4, qid 0 00:28:29.204 [2024-06-07 23:24:51.634693] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:29.204 [2024-06-07 23:24:51.634699] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:29.204 [2024-06-07 23:24:51.634703] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:29.204 [2024-06-07 23:24:51.634706] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb2cfd0): datao=0, datal=3072, cccid=4 00:28:29.204 [2024-06-07 23:24:51.634711] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb9a700) on tqpair(0xb2cfd0): expected_datao=0, payload_size=3072 00:28:29.204 [2024-06-07 23:24:51.634741] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:28:29.204 [2024-06-07 23:24:51.634745] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:29.204 [2024-06-07 23:24:51.634901] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.204 [2024-06-07 23:24:51.634908] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.204 [2024-06-07 23:24:51.634911] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.204 [2024-06-07 23:24:51.634915] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb9a700) on tqpair=0xb2cfd0 00:28:29.204 [2024-06-07 23:24:51.634926] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.204 [2024-06-07 23:24:51.634930] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.204 [2024-06-07 23:24:51.634933] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb2cfd0) 00:28:29.204 [2024-06-07 23:24:51.634940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.204 [2024-06-07 23:24:51.634953] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9a700, cid 4, qid 0 00:28:29.204 [2024-06-07 23:24:51.635188] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:29.204 [2024-06-07 23:24:51.635195] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:29.204 [2024-06-07 23:24:51.635198] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:29.204 [2024-06-07 23:24:51.635201] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb2cfd0): datao=0, datal=8, cccid=4 00:28:29.204 [2024-06-07 23:24:51.635206] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb9a700) on tqpair(0xb2cfd0): expected_datao=0, payload_size=8 00:28:29.204 [2024-06-07 23:24:51.635213] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:29.204 [2024-06-07 23:24:51.635216] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:29.204 [2024-06-07 23:24:51.676451] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.204 [2024-06-07 23:24:51.676463] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.204 [2024-06-07 23:24:51.676466] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.204 [2024-06-07 23:24:51.676470] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb9a700) on tqpair=0xb2cfd0 00:28:29.204 ===================================================== 00:28:29.204 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:29.204 ===================================================== 00:28:29.204 Controller Capabilities/Features 00:28:29.204 ================================ 00:28:29.204 Vendor ID: 0000 00:28:29.204 Subsystem Vendor ID: 0000 00:28:29.204 Serial Number: .................... 00:28:29.204 Model Number: ........................................ 
00:28:29.204 Firmware Version: 24.01.1 00:28:29.204 Recommended Arb Burst: 0 00:28:29.204 IEEE OUI Identifier: 00 00 00 00:28:29.204 Multi-path I/O 00:28:29.204 May have multiple subsystem ports: No 00:28:29.204 May have multiple controllers: No 00:28:29.204 Associated with SR-IOV VF: No 00:28:29.204 Max Data Transfer Size: 131072 00:28:29.204 Max Number of Namespaces: 0 00:28:29.204 Max Number of I/O Queues: 1024 00:28:29.204 NVMe Specification Version (VS): 1.3 00:28:29.204 NVMe Specification Version (Identify): 1.3 00:28:29.204 Maximum Queue Entries: 128 00:28:29.204 Contiguous Queues Required: Yes 00:28:29.204 Arbitration Mechanisms Supported 00:28:29.204 Weighted Round Robin: Not Supported 00:28:29.204 Vendor Specific: Not Supported 00:28:29.204 Reset Timeout: 15000 ms 00:28:29.204 Doorbell Stride: 4 bytes 00:28:29.204 NVM Subsystem Reset: Not Supported 00:28:29.204 Command Sets Supported 00:28:29.204 NVM Command Set: Supported 00:28:29.204 Boot Partition: Not Supported 00:28:29.204 Memory Page Size Minimum: 4096 bytes 00:28:29.204 Memory Page Size Maximum: 4096 bytes 00:28:29.204 Persistent Memory Region: Not Supported 00:28:29.204 Optional Asynchronous Events Supported 00:28:29.204 Namespace Attribute Notices: Not Supported 00:28:29.204 Firmware Activation Notices: Not Supported 00:28:29.204 ANA Change Notices: Not Supported 00:28:29.204 PLE Aggregate Log Change Notices: Not Supported 00:28:29.204 LBA Status Info Alert Notices: Not Supported 00:28:29.204 EGE Aggregate Log Change Notices: Not Supported 00:28:29.204 Normal NVM Subsystem Shutdown event: Not Supported 00:28:29.204 Zone Descriptor Change Notices: Not Supported 00:28:29.204 Discovery Log Change Notices: Supported 00:28:29.204 Controller Attributes 00:28:29.204 128-bit Host Identifier: Not Supported 00:28:29.204 Non-Operational Permissive Mode: Not Supported 00:28:29.204 NVM Sets: Not Supported 00:28:29.204 Read Recovery Levels: Not Supported 00:28:29.204 Endurance Groups: Not Supported 00:28:29.204 Predictable Latency Mode: Not Supported 00:28:29.204 Traffic Based Keep ALive: Not Supported 00:28:29.204 Namespace Granularity: Not Supported 00:28:29.204 SQ Associations: Not Supported 00:28:29.204 UUID List: Not Supported 00:28:29.204 Multi-Domain Subsystem: Not Supported 00:28:29.204 Fixed Capacity Management: Not Supported 00:28:29.204 Variable Capacity Management: Not Supported 00:28:29.204 Delete Endurance Group: Not Supported 00:28:29.204 Delete NVM Set: Not Supported 00:28:29.204 Extended LBA Formats Supported: Not Supported 00:28:29.204 Flexible Data Placement Supported: Not Supported 00:28:29.204 00:28:29.205 Controller Memory Buffer Support 00:28:29.205 ================================ 00:28:29.205 Supported: No 00:28:29.205 00:28:29.205 Persistent Memory Region Support 00:28:29.205 ================================ 00:28:29.205 Supported: No 00:28:29.205 00:28:29.205 Admin Command Set Attributes 00:28:29.205 ============================ 00:28:29.205 Security Send/Receive: Not Supported 00:28:29.205 Format NVM: Not Supported 00:28:29.205 Firmware Activate/Download: Not Supported 00:28:29.205 Namespace Management: Not Supported 00:28:29.205 Device Self-Test: Not Supported 00:28:29.205 Directives: Not Supported 00:28:29.205 NVMe-MI: Not Supported 00:28:29.205 Virtualization Management: Not Supported 00:28:29.205 Doorbell Buffer Config: Not Supported 00:28:29.205 Get LBA Status Capability: Not Supported 00:28:29.205 Command & Feature Lockdown Capability: Not Supported 00:28:29.205 Abort Command Limit: 1 00:28:29.205 
Async Event Request Limit: 4 00:28:29.205 Number of Firmware Slots: N/A 00:28:29.205 Firmware Slot 1 Read-Only: N/A 00:28:29.205 Firmware Activation Without Reset: N/A 00:28:29.205 Multiple Update Detection Support: N/A 00:28:29.205 Firmware Update Granularity: No Information Provided 00:28:29.205 Per-Namespace SMART Log: No 00:28:29.205 Asymmetric Namespace Access Log Page: Not Supported 00:28:29.205 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:29.205 Command Effects Log Page: Not Supported 00:28:29.205 Get Log Page Extended Data: Supported 00:28:29.205 Telemetry Log Pages: Not Supported 00:28:29.205 Persistent Event Log Pages: Not Supported 00:28:29.205 Supported Log Pages Log Page: May Support 00:28:29.205 Commands Supported & Effects Log Page: Not Supported 00:28:29.205 Feature Identifiers & Effects Log Page:May Support 00:28:29.205 NVMe-MI Commands & Effects Log Page: May Support 00:28:29.205 Data Area 4 for Telemetry Log: Not Supported 00:28:29.205 Error Log Page Entries Supported: 128 00:28:29.205 Keep Alive: Not Supported 00:28:29.205 00:28:29.205 NVM Command Set Attributes 00:28:29.205 ========================== 00:28:29.205 Submission Queue Entry Size 00:28:29.205 Max: 1 00:28:29.205 Min: 1 00:28:29.205 Completion Queue Entry Size 00:28:29.205 Max: 1 00:28:29.205 Min: 1 00:28:29.205 Number of Namespaces: 0 00:28:29.205 Compare Command: Not Supported 00:28:29.205 Write Uncorrectable Command: Not Supported 00:28:29.205 Dataset Management Command: Not Supported 00:28:29.205 Write Zeroes Command: Not Supported 00:28:29.205 Set Features Save Field: Not Supported 00:28:29.205 Reservations: Not Supported 00:28:29.205 Timestamp: Not Supported 00:28:29.205 Copy: Not Supported 00:28:29.205 Volatile Write Cache: Not Present 00:28:29.205 Atomic Write Unit (Normal): 1 00:28:29.205 Atomic Write Unit (PFail): 1 00:28:29.205 Atomic Compare & Write Unit: 1 00:28:29.205 Fused Compare & Write: Supported 00:28:29.205 Scatter-Gather List 00:28:29.205 SGL Command Set: Supported 00:28:29.205 SGL Keyed: Supported 00:28:29.205 SGL Bit Bucket Descriptor: Not Supported 00:28:29.205 SGL Metadata Pointer: Not Supported 00:28:29.205 Oversized SGL: Not Supported 00:28:29.205 SGL Metadata Address: Not Supported 00:28:29.205 SGL Offset: Supported 00:28:29.205 Transport SGL Data Block: Not Supported 00:28:29.205 Replay Protected Memory Block: Not Supported 00:28:29.205 00:28:29.205 Firmware Slot Information 00:28:29.205 ========================= 00:28:29.205 Active slot: 0 00:28:29.205 00:28:29.205 00:28:29.205 Error Log 00:28:29.205 ========= 00:28:29.205 00:28:29.205 Active Namespaces 00:28:29.205 ================= 00:28:29.205 Discovery Log Page 00:28:29.205 ================== 00:28:29.205 Generation Counter: 2 00:28:29.205 Number of Records: 2 00:28:29.205 Record Format: 0 00:28:29.205 00:28:29.205 Discovery Log Entry 0 00:28:29.205 ---------------------- 00:28:29.205 Transport Type: 3 (TCP) 00:28:29.205 Address Family: 1 (IPv4) 00:28:29.205 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:29.205 Entry Flags: 00:28:29.205 Duplicate Returned Information: 1 00:28:29.205 Explicit Persistent Connection Support for Discovery: 1 00:28:29.205 Transport Requirements: 00:28:29.205 Secure Channel: Not Required 00:28:29.205 Port ID: 0 (0x0000) 00:28:29.205 Controller ID: 65535 (0xffff) 00:28:29.205 Admin Max SQ Size: 128 00:28:29.205 Transport Service Identifier: 4420 00:28:29.205 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:29.205 Transport Address: 10.0.0.2 00:28:29.205 
Discovery Log Entry 1 00:28:29.205 ---------------------- 00:28:29.205 Transport Type: 3 (TCP) 00:28:29.205 Address Family: 1 (IPv4) 00:28:29.205 Subsystem Type: 2 (NVM Subsystem) 00:28:29.205 Entry Flags: 00:28:29.205 Duplicate Returned Information: 0 00:28:29.205 Explicit Persistent Connection Support for Discovery: 0 00:28:29.205 Transport Requirements: 00:28:29.205 Secure Channel: Not Required 00:28:29.205 Port ID: 0 (0x0000) 00:28:29.205 Controller ID: 65535 (0xffff) 00:28:29.205 Admin Max SQ Size: 128 00:28:29.205 Transport Service Identifier: 4420 00:28:29.205 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:28:29.205 Transport Address: 10.0.0.2 [2024-06-07 23:24:51.676556] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:28:29.205 [2024-06-07 23:24:51.676569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.205 [2024-06-07 23:24:51.676576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.205 [2024-06-07 23:24:51.676581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.205 [2024-06-07 23:24:51.676587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.205 [2024-06-07 23:24:51.676598] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.205 [2024-06-07 23:24:51.676602] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.205 [2024-06-07 23:24:51.676605] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb2cfd0) 00:28:29.205 [2024-06-07 23:24:51.676612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.205 [2024-06-07 23:24:51.676626] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9a5a0, cid 3, qid 0 00:28:29.205 [2024-06-07 23:24:51.676856] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.205 [2024-06-07 23:24:51.676863] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.205 [2024-06-07 23:24:51.676866] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.205 [2024-06-07 23:24:51.676870] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb9a5a0) on tqpair=0xb2cfd0 00:28:29.205 [2024-06-07 23:24:51.676876] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.205 [2024-06-07 23:24:51.676880] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.205 [2024-06-07 23:24:51.676884] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb2cfd0) 00:28:29.205 [2024-06-07 23:24:51.676890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.205 [2024-06-07 23:24:51.676905] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9a5a0, cid 3, qid 0 00:28:29.205 [2024-06-07 23:24:51.677124] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.205 [2024-06-07 23:24:51.677130] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.205 [2024-06-07 23:24:51.677134] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.205 [2024-06-07 23:24:51.677137] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb9a5a0) on tqpair=0xb2cfd0 00:28:29.205 [2024-06-07 23:24:51.677142] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:29.205 [2024-06-07 23:24:51.677147] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:29.205 [2024-06-07 23:24:51.677156] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.205 [2024-06-07 23:24:51.677160] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.205 [2024-06-07 23:24:51.677163] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb2cfd0) 00:28:29.205 [2024-06-07 23:24:51.677170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.205 [2024-06-07 23:24:51.677180] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb9a5a0, cid 3, qid 0 00:28:29.205 [2024-06-07 23:24:51.681251] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.205 [2024-06-07 23:24:51.681260] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.205 [2024-06-07 23:24:51.681264] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.205 [2024-06-07 23:24:51.681267] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb9a5a0) on tqpair=0xb2cfd0 00:28:29.205 [2024-06-07 23:24:51.681276] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:28:29.205 00:28:29.206 23:24:51 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:29.206 [2024-06-07 23:24:51.718985] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
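The host/identify.sh step above invokes SPDK's spdk_nvme_identify example with a transport ID string pointing at nqn.2016-06.io.spdk:cnode1 over TCP; the DEBUG/NOTICE records that follow are that tool connecting and walking the admin command set. As orientation for reading those records, here is a minimal sketch of the same connect-and-identify flow written against the public spdk/nvme.h host API; it is not part of the test, the error handling is reduced to early returns, and the printed fields are only a small subset of what the tool reports.

    /* Sketch: connect to the NVMe-oF/TCP subsystem exercised above and print
     * a few Identify Controller fields. Illustration only, not SPDK source;
     * assumes the public SPDK host API (spdk/env.h, spdk/nvme.h). */
    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid;
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";           /* hypothetical app name */
        if (spdk_env_init(&env_opts) != 0) {
            return 1;
        }

        /* Same transport ID string the test passes via -r. */
        memset(&trid, 0, sizeof(trid));
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* Drives the admin queue bring-up traced in the records below. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Model: %.40s Serial: %.20s FW: %.8s\n",
               cdata->mn, cdata->sn, cdata->fr);

        spdk_nvme_detach(ctrlr);
        return 0;
    }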
00:28:29.206 [2024-06-07 23:24:51.719042] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2981019 ] 00:28:29.206 EAL: No free 2048 kB hugepages reported on node 1 00:28:29.206 [2024-06-07 23:24:51.750796] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:29.206 [2024-06-07 23:24:51.750840] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:29.206 [2024-06-07 23:24:51.750845] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:29.206 [2024-06-07 23:24:51.750859] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:29.206 [2024-06-07 23:24:51.750866] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:29.206 [2024-06-07 23:24:51.754271] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:29.206 [2024-06-07 23:24:51.754295] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xc4efd0 0 00:28:29.206 [2024-06-07 23:24:51.762252] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:29.206 [2024-06-07 23:24:51.762262] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:29.206 [2024-06-07 23:24:51.762266] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:29.206 [2024-06-07 23:24:51.762273] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:29.206 [2024-06-07 23:24:51.762303] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.206 [2024-06-07 23:24:51.762308] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.206 [2024-06-07 23:24:51.762312] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc4efd0) 00:28:29.206 [2024-06-07 23:24:51.762323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:29.206 [2024-06-07 23:24:51.762338] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbc180, cid 0, qid 0 00:28:29.206 [2024-06-07 23:24:51.770255] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.206 [2024-06-07 23:24:51.770264] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.206 [2024-06-07 23:24:51.770267] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.206 [2024-06-07 23:24:51.770271] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcbc180) on tqpair=0xc4efd0 00:28:29.206 [2024-06-07 23:24:51.770280] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:29.206 [2024-06-07 23:24:51.770285] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:29.206 [2024-06-07 23:24:51.770290] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:29.206 [2024-06-07 23:24:51.770303] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.206 [2024-06-07 23:24:51.770307] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.206 [2024-06-07 
23:24:51.770311] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc4efd0) 00:28:29.206 [2024-06-07 23:24:51.770318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.206 [2024-06-07 23:24:51.770330] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbc180, cid 0, qid 0 00:28:29.206 [2024-06-07 23:24:51.770540] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.206 [2024-06-07 23:24:51.770546] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.206 [2024-06-07 23:24:51.770550] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.206 [2024-06-07 23:24:51.770553] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcbc180) on tqpair=0xc4efd0 00:28:29.206 [2024-06-07 23:24:51.770561] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:29.206 [2024-06-07 23:24:51.770568] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:29.206 [2024-06-07 23:24:51.770575] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.206 [2024-06-07 23:24:51.770578] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.206 [2024-06-07 23:24:51.770582] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc4efd0) 00:28:29.206 [2024-06-07 23:24:51.770588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.206 [2024-06-07 23:24:51.770598] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbc180, cid 0, qid 0 00:28:29.206 [2024-06-07 23:24:51.770777] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.206 [2024-06-07 23:24:51.770783] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.206 [2024-06-07 23:24:51.770787] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.206 [2024-06-07 23:24:51.770790] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcbc180) on tqpair=0xc4efd0 00:28:29.206 [2024-06-07 23:24:51.770795] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:29.206 [2024-06-07 23:24:51.770803] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:29.206 [2024-06-07 23:24:51.770811] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.206 [2024-06-07 23:24:51.770815] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.206 [2024-06-07 23:24:51.770818] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc4efd0) 00:28:29.206 [2024-06-07 23:24:51.770825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.206 [2024-06-07 23:24:51.770835] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbc180, cid 0, qid 0 00:28:29.206 [2024-06-07 23:24:51.771042] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.206 [2024-06-07 23:24:51.771048] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.206 
[2024-06-07 23:24:51.771052] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.206 [2024-06-07 23:24:51.771055] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcbc180) on tqpair=0xc4efd0 00:28:29.206 [2024-06-07 23:24:51.771060] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:29.206 [2024-06-07 23:24:51.771069] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.206 [2024-06-07 23:24:51.771073] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.206 [2024-06-07 23:24:51.771076] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc4efd0) 00:28:29.206 [2024-06-07 23:24:51.771083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.206 [2024-06-07 23:24:51.771092] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbc180, cid 0, qid 0 00:28:29.206 [2024-06-07 23:24:51.771159] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.206 [2024-06-07 23:24:51.771165] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.206 [2024-06-07 23:24:51.771169] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.206 [2024-06-07 23:24:51.771172] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcbc180) on tqpair=0xc4efd0 00:28:29.206 [2024-06-07 23:24:51.771177] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:29.206 [2024-06-07 23:24:51.771181] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:29.206 [2024-06-07 23:24:51.771189] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:29.206 [2024-06-07 23:24:51.771294] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:29.206 [2024-06-07 23:24:51.771298] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:29.206 [2024-06-07 23:24:51.771305] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.206 [2024-06-07 23:24:51.771308] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.206 [2024-06-07 23:24:51.771312] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc4efd0) 00:28:29.206 [2024-06-07 23:24:51.771319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.206 [2024-06-07 23:24:51.771329] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbc180, cid 0, qid 0 00:28:29.206 [2024-06-07 23:24:51.771408] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.206 [2024-06-07 23:24:51.771414] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.206 [2024-06-07 23:24:51.771418] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.206 [2024-06-07 23:24:51.771421] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcbc180) on tqpair=0xc4efd0 00:28:29.206 
[2024-06-07 23:24:51.771426] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:29.206 [2024-06-07 23:24:51.771437] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.206 [2024-06-07 23:24:51.771440] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.206 [2024-06-07 23:24:51.771444] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc4efd0) 00:28:29.206 [2024-06-07 23:24:51.771451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.206 [2024-06-07 23:24:51.771460] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbc180, cid 0, qid 0 00:28:29.206 [2024-06-07 23:24:51.771678] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.206 [2024-06-07 23:24:51.771684] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.206 [2024-06-07 23:24:51.771688] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.206 [2024-06-07 23:24:51.771691] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcbc180) on tqpair=0xc4efd0 00:28:29.206 [2024-06-07 23:24:51.771696] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:29.206 [2024-06-07 23:24:51.771700] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:29.206 [2024-06-07 23:24:51.771707] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:29.206 [2024-06-07 23:24:51.771719] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:29.206 [2024-06-07 23:24:51.771726] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.206 [2024-06-07 23:24:51.771730] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.206 [2024-06-07 23:24:51.771733] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc4efd0) 00:28:29.206 [2024-06-07 23:24:51.771740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.206 [2024-06-07 23:24:51.771750] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbc180, cid 0, qid 0 00:28:29.207 [2024-06-07 23:24:51.772011] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:29.207 [2024-06-07 23:24:51.772017] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:29.207 [2024-06-07 23:24:51.772021] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:29.207 [2024-06-07 23:24:51.772024] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc4efd0): datao=0, datal=4096, cccid=0 00:28:29.207 [2024-06-07 23:24:51.772029] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcbc180) on tqpair(0xc4efd0): expected_datao=0, payload_size=4096 00:28:29.207 [2024-06-07 23:24:51.772037] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:29.207 [2024-06-07 23:24:51.772041] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
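The state transitions traced just above ("read vs", "read cap", "check en", "enable controller by writing CC.EN = 1", "wait for CSTS.RDY = 1") are the standard NVMe controller enable sequence; because this controller sits behind NVMe-oF, each register access is carried as a Fabrics Property Get/Set capsule (the FABRIC PROPERTY GET/SET notices) instead of an MMIO access. For orientation, the offsets involved come from the NVMe base specification rather than from this log: CAP at property offset 0x00, VS at 0x08, CC at 0x14 and CSTS at 0x1C, with CC.EN and CSTS.RDY both in bit 0. The 15000 ms bound on the RDY wait lines up with the "Reset Timeout: 15000 ms" value the identify report prints for this controller further down.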
00:28:29.207 [2024-06-07 23:24:51.772173] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.207 [2024-06-07 23:24:51.772179] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.207 [2024-06-07 23:24:51.772182] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.207 [2024-06-07 23:24:51.772186] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcbc180) on tqpair=0xc4efd0 00:28:29.207 [2024-06-07 23:24:51.772193] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:29.207 [2024-06-07 23:24:51.772200] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:29.207 [2024-06-07 23:24:51.772204] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:29.207 [2024-06-07 23:24:51.772208] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:29.207 [2024-06-07 23:24:51.772214] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:29.207 [2024-06-07 23:24:51.772219] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:29.207 [2024-06-07 23:24:51.772226] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:29.207 [2024-06-07 23:24:51.772233] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.207 [2024-06-07 23:24:51.772237] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.207 [2024-06-07 23:24:51.772240] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc4efd0) 00:28:29.207 [2024-06-07 23:24:51.772252] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:29.207 [2024-06-07 23:24:51.772262] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbc180, cid 0, qid 0 00:28:29.207 [2024-06-07 23:24:51.772460] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.207 [2024-06-07 23:24:51.772467] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.207 [2024-06-07 23:24:51.772470] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.207 [2024-06-07 23:24:51.772474] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcbc180) on tqpair=0xc4efd0 00:28:29.207 [2024-06-07 23:24:51.772480] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.207 [2024-06-07 23:24:51.772484] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.207 [2024-06-07 23:24:51.772487] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc4efd0) 00:28:29.207 [2024-06-07 23:24:51.772493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.207 [2024-06-07 23:24:51.772499] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.207 [2024-06-07 23:24:51.772503] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.207 [2024-06-07 23:24:51.772506] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on 
tqpair(0xc4efd0) 00:28:29.207 [2024-06-07 23:24:51.772512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.207 [2024-06-07 23:24:51.772518] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.207 [2024-06-07 23:24:51.772521] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.207 [2024-06-07 23:24:51.772525] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xc4efd0) 00:28:29.207 [2024-06-07 23:24:51.772530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.207 [2024-06-07 23:24:51.772536] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.207 [2024-06-07 23:24:51.772539] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.207 [2024-06-07 23:24:51.772543] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4efd0) 00:28:29.207 [2024-06-07 23:24:51.772548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.207 [2024-06-07 23:24:51.772553] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:29.207 [2024-06-07 23:24:51.772562] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:29.207 [2024-06-07 23:24:51.772569] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.207 [2024-06-07 23:24:51.772572] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.207 [2024-06-07 23:24:51.772576] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc4efd0) 00:28:29.207 [2024-06-07 23:24:51.772582] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.207 [2024-06-07 23:24:51.772595] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbc180, cid 0, qid 0 00:28:29.207 [2024-06-07 23:24:51.772600] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbc2e0, cid 1, qid 0 00:28:29.207 [2024-06-07 23:24:51.772605] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbc440, cid 2, qid 0 00:28:29.207 [2024-06-07 23:24:51.772609] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbc5a0, cid 3, qid 0 00:28:29.207 [2024-06-07 23:24:51.772614] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbc700, cid 4, qid 0 00:28:29.207 [2024-06-07 23:24:51.772811] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.207 [2024-06-07 23:24:51.772818] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.207 [2024-06-07 23:24:51.772821] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.207 [2024-06-07 23:24:51.772825] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcbc700) on tqpair=0xc4efd0 00:28:29.207 [2024-06-07 23:24:51.772829] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:29.207 [2024-06-07 23:24:51.772834] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:28:29.207 [2024-06-07 23:24:51.772842] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:29.207 [2024-06-07 23:24:51.772848] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:29.207 [2024-06-07 23:24:51.772854] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.207 [2024-06-07 23:24:51.772858] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.207 [2024-06-07 23:24:51.772861] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc4efd0) 00:28:29.207 [2024-06-07 23:24:51.772868] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:29.207 [2024-06-07 23:24:51.772877] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbc700, cid 4, qid 0 00:28:29.207 [2024-06-07 23:24:51.773122] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.207 [2024-06-07 23:24:51.773129] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.207 [2024-06-07 23:24:51.773133] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.207 [2024-06-07 23:24:51.773136] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcbc700) on tqpair=0xc4efd0 00:28:29.207 [2024-06-07 23:24:51.773185] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:29.207 [2024-06-07 23:24:51.773194] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:29.207 [2024-06-07 23:24:51.773201] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.207 [2024-06-07 23:24:51.773205] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.207 [2024-06-07 23:24:51.773208] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc4efd0) 00:28:29.207 [2024-06-07 23:24:51.773214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.207 [2024-06-07 23:24:51.773224] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbc700, cid 4, qid 0 00:28:29.207 [2024-06-07 23:24:51.773440] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:29.207 [2024-06-07 23:24:51.773447] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:29.207 [2024-06-07 23:24:51.773450] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:29.207 [2024-06-07 23:24:51.773454] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc4efd0): datao=0, datal=4096, cccid=4 00:28:29.207 [2024-06-07 23:24:51.773462] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcbc700) on tqpair(0xc4efd0): expected_datao=0, payload_size=4096 00:28:29.207 [2024-06-07 23:24:51.773490] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:29.207 [2024-06-07 23:24:51.773495] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:29.207 [2024-06-07 23:24:51.818253] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.207 [2024-06-07 23:24:51.818263] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.207 [2024-06-07 23:24:51.818266] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.207 [2024-06-07 23:24:51.818270] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcbc700) on tqpair=0xc4efd0 00:28:29.207 [2024-06-07 23:24:51.818279] nvme_ctrlr.c:4542:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:29.207 [2024-06-07 23:24:51.818289] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:29.207 [2024-06-07 23:24:51.818298] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:29.207 [2024-06-07 23:24:51.818304] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.207 [2024-06-07 23:24:51.818308] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.207 [2024-06-07 23:24:51.818312] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc4efd0) 00:28:29.207 [2024-06-07 23:24:51.818319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.207 [2024-06-07 23:24:51.818330] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbc700, cid 4, qid 0 00:28:29.207 [2024-06-07 23:24:51.818509] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:29.207 [2024-06-07 23:24:51.818516] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:29.207 [2024-06-07 23:24:51.818519] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:29.207 [2024-06-07 23:24:51.818522] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc4efd0): datao=0, datal=4096, cccid=4 00:28:29.208 [2024-06-07 23:24:51.818527] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcbc700) on tqpair(0xc4efd0): expected_datao=0, payload_size=4096 00:28:29.208 [2024-06-07 23:24:51.818555] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:29.208 [2024-06-07 23:24:51.818559] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:29.208 [2024-06-07 23:24:51.860440] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.208 [2024-06-07 23:24:51.860452] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.208 [2024-06-07 23:24:51.860455] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.208 [2024-06-07 23:24:51.860459] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcbc700) on tqpair=0xc4efd0 00:28:29.208 [2024-06-07 23:24:51.860473] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:29.208 [2024-06-07 23:24:51.860482] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:29.208 [2024-06-07 23:24:51.860490] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.208 [2024-06-07 23:24:51.860493] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.208 [2024-06-07 
23:24:51.860497] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc4efd0) 00:28:29.208 [2024-06-07 23:24:51.860503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.208 [2024-06-07 23:24:51.860515] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbc700, cid 4, qid 0 00:28:29.208 [2024-06-07 23:24:51.860731] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:29.208 [2024-06-07 23:24:51.860738] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:29.208 [2024-06-07 23:24:51.860741] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:29.208 [2024-06-07 23:24:51.860745] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc4efd0): datao=0, datal=4096, cccid=4 00:28:29.208 [2024-06-07 23:24:51.860749] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcbc700) on tqpair(0xc4efd0): expected_datao=0, payload_size=4096 00:28:29.208 [2024-06-07 23:24:51.860777] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:29.208 [2024-06-07 23:24:51.860781] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:29.471 [2024-06-07 23:24:51.906251] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.471 [2024-06-07 23:24:51.906261] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.471 [2024-06-07 23:24:51.906264] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.471 [2024-06-07 23:24:51.906268] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcbc700) on tqpair=0xc4efd0 00:28:29.471 [2024-06-07 23:24:51.906276] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:29.471 [2024-06-07 23:24:51.906284] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:29.471 [2024-06-07 23:24:51.906292] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:29.471 [2024-06-07 23:24:51.906298] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:29.471 [2024-06-07 23:24:51.906303] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:29.471 [2024-06-07 23:24:51.906308] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:29.471 [2024-06-07 23:24:51.906313] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:29.471 [2024-06-07 23:24:51.906318] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:29.471 [2024-06-07 23:24:51.906332] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.471 [2024-06-07 23:24:51.906335] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.471 [2024-06-07 23:24:51.906339] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc4efd0) 00:28:29.471 [2024-06-07 
23:24:51.906346] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.471 [2024-06-07 23:24:51.906352] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.471 [2024-06-07 23:24:51.906356] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.471 [2024-06-07 23:24:51.906359] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc4efd0) 00:28:29.471 [2024-06-07 23:24:51.906365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.471 [2024-06-07 23:24:51.906379] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbc700, cid 4, qid 0 00:28:29.471 [2024-06-07 23:24:51.906384] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbc860, cid 5, qid 0 00:28:29.471 [2024-06-07 23:24:51.906483] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.471 [2024-06-07 23:24:51.906489] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.471 [2024-06-07 23:24:51.906492] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.471 [2024-06-07 23:24:51.906496] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcbc700) on tqpair=0xc4efd0 00:28:29.471 [2024-06-07 23:24:51.906505] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.471 [2024-06-07 23:24:51.906511] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.471 [2024-06-07 23:24:51.906514] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.471 [2024-06-07 23:24:51.906518] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcbc860) on tqpair=0xc4efd0 00:28:29.471 [2024-06-07 23:24:51.906526] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.471 [2024-06-07 23:24:51.906530] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.471 [2024-06-07 23:24:51.906534] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc4efd0) 00:28:29.471 [2024-06-07 23:24:51.906540] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.471 [2024-06-07 23:24:51.906550] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbc860, cid 5, qid 0 00:28:29.471 [2024-06-07 23:24:51.906749] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.471 [2024-06-07 23:24:51.906755] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.471 [2024-06-07 23:24:51.906759] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.471 [2024-06-07 23:24:51.906762] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcbc860) on tqpair=0xc4efd0 00:28:29.471 [2024-06-07 23:24:51.906771] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.471 [2024-06-07 23:24:51.906775] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.471 [2024-06-07 23:24:51.906778] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc4efd0) 00:28:29.471 [2024-06-07 23:24:51.906784] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
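Around this point the host reads back a set of standard features; the cdw10 values in the GET FEATURES notices are NVMe Feature Identifiers (0Fh Keep Alive Timer, 01h Arbitration, 02h Power Management, 04h Temperature Threshold, 07h Number of Queues, and 05h Error Recovery a little later). The earlier "Sending keep alive every 5000000 us" record is consistent with the host pacing keep-alives at half of a 10-second keep-alive timeout (5,000,000 us = 5 s); that halving is assumed SPDK host behaviour here, not something the log states explicitly.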
00:28:29.471 [2024-06-07 23:24:51.906793] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbc860, cid 5, qid 0 00:28:29.471 [2024-06-07 23:24:51.906965] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.471 [2024-06-07 23:24:51.906972] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.471 [2024-06-07 23:24:51.906975] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.471 [2024-06-07 23:24:51.906979] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcbc860) on tqpair=0xc4efd0 00:28:29.471 [2024-06-07 23:24:51.906987] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.471 [2024-06-07 23:24:51.906991] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.471 [2024-06-07 23:24:51.906994] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc4efd0) 00:28:29.471 [2024-06-07 23:24:51.907001] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.471 [2024-06-07 23:24:51.907010] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbc860, cid 5, qid 0 00:28:29.471 [2024-06-07 23:24:51.907187] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.471 [2024-06-07 23:24:51.907194] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.471 [2024-06-07 23:24:51.907197] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.471 [2024-06-07 23:24:51.907201] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcbc860) on tqpair=0xc4efd0 00:28:29.471 [2024-06-07 23:24:51.907211] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.471 [2024-06-07 23:24:51.907215] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.471 [2024-06-07 23:24:51.907218] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc4efd0) 00:28:29.471 [2024-06-07 23:24:51.907225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.471 [2024-06-07 23:24:51.907232] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.471 [2024-06-07 23:24:51.907235] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.471 [2024-06-07 23:24:51.907240] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc4efd0) 00:28:29.471 [2024-06-07 23:24:51.907251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.471 [2024-06-07 23:24:51.907258] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.472 [2024-06-07 23:24:51.907262] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.472 [2024-06-07 23:24:51.907265] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xc4efd0) 00:28:29.472 [2024-06-07 23:24:51.907271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.472 [2024-06-07 23:24:51.907278] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.472 [2024-06-07 
23:24:51.907282] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.472 [2024-06-07 23:24:51.907285] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xc4efd0) 00:28:29.472 [2024-06-07 23:24:51.907291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.472 [2024-06-07 23:24:51.907302] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbc860, cid 5, qid 0 00:28:29.472 [2024-06-07 23:24:51.907307] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbc700, cid 4, qid 0 00:28:29.472 [2024-06-07 23:24:51.907312] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbc9c0, cid 6, qid 0 00:28:29.472 [2024-06-07 23:24:51.907316] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbcb20, cid 7, qid 0 00:28:29.472 [2024-06-07 23:24:51.907581] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:29.472 [2024-06-07 23:24:51.907587] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:29.472 [2024-06-07 23:24:51.907591] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:29.472 [2024-06-07 23:24:51.907594] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc4efd0): datao=0, datal=8192, cccid=5 00:28:29.472 [2024-06-07 23:24:51.907598] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcbc860) on tqpair(0xc4efd0): expected_datao=0, payload_size=8192 00:28:29.472 [2024-06-07 23:24:51.907679] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:29.472 [2024-06-07 23:24:51.907683] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:29.472 [2024-06-07 23:24:51.907689] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:29.472 [2024-06-07 23:24:51.907694] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:29.472 [2024-06-07 23:24:51.907698] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:29.472 [2024-06-07 23:24:51.907701] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc4efd0): datao=0, datal=512, cccid=4 00:28:29.472 [2024-06-07 23:24:51.907705] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcbc700) on tqpair(0xc4efd0): expected_datao=0, payload_size=512 00:28:29.472 [2024-06-07 23:24:51.907712] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:29.472 [2024-06-07 23:24:51.907716] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:29.472 [2024-06-07 23:24:51.907721] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:29.472 [2024-06-07 23:24:51.907727] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:29.472 [2024-06-07 23:24:51.907730] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:29.472 [2024-06-07 23:24:51.907734] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc4efd0): datao=0, datal=512, cccid=6 00:28:29.472 [2024-06-07 23:24:51.907738] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcbc9c0) on tqpair(0xc4efd0): expected_datao=0, payload_size=512 00:28:29.472 [2024-06-07 23:24:51.907745] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:29.472 [2024-06-07 23:24:51.907750] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:29.472 
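The four GET LOG PAGE commands above fetch the Error Information, SMART/Health, Firmware Slot and Commands Supported and Effects log pages that feed the report printed below. Their cdw10 values use the standard encoding (bits 7:0 = Log Page Identifier, bits 31:16 = NUMDL, the low half of the 0-based dword count; NUMDU sits in cdw11, which is 0 for all four), and decoding them reproduces the datal values seen in the surrounding c2h_data records (8192, 512, 512 and 4096 bytes). The small stand-alone decoder below is an illustration of that encoding, not SPDK code:

    /* Decode the GET LOG PAGE cdw10 values from the traces above.
     * Per the NVMe base spec: bits 7:0 = LID, bits 31:16 = NUMDL
     * (low 16 bits of the 0-based dword count; NUMDU is in cdw11,
     * zero for every command here). Illustration only. */
    #include <stdio.h>
    #include <stdint.h>

    static void decode_cdw10(uint32_t cdw10, const char *lid_name)
    {
        uint32_t lid   = cdw10 & 0xff;
        uint32_t numdl = (cdw10 >> 16) & 0xffff;
        uint32_t bytes = (numdl + 1) * 4;   /* 0-based dwords -> bytes */

        printf("cdw10=0x%08x  LID=0x%02x (%s)  transfer=%u bytes\n",
               cdw10, lid, lid_name, bytes);
    }

    int main(void)
    {
        decode_cdw10(0x07ff0001, "Error Information");              /* 8192 B = 128 entries x 64 B */
        decode_cdw10(0x007f0002, "SMART / Health Information");     /*  512 B */
        decode_cdw10(0x007f0003, "Firmware Slot Information");      /*  512 B */
        decode_cdw10(0x03ff0005, "Commands Supported and Effects"); /* 4096 B */
        return 0;
    }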
[2024-06-07 23:24:51.907756] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:29.472 [2024-06-07 23:24:51.907761] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:29.472 [2024-06-07 23:24:51.907765] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:29.472 [2024-06-07 23:24:51.907768] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc4efd0): datao=0, datal=4096, cccid=7 00:28:29.472 [2024-06-07 23:24:51.907772] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcbcb20) on tqpair(0xc4efd0): expected_datao=0, payload_size=4096 00:28:29.472 [2024-06-07 23:24:51.907780] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:29.472 [2024-06-07 23:24:51.907783] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:29.472 [2024-06-07 23:24:51.907818] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.472 [2024-06-07 23:24:51.907824] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.472 [2024-06-07 23:24:51.907827] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.472 [2024-06-07 23:24:51.907831] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcbc860) on tqpair=0xc4efd0 00:28:29.472 [2024-06-07 23:24:51.907843] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.472 [2024-06-07 23:24:51.907849] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.472 [2024-06-07 23:24:51.907853] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.472 [2024-06-07 23:24:51.907856] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcbc700) on tqpair=0xc4efd0 00:28:29.472 [2024-06-07 23:24:51.907864] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.472 [2024-06-07 23:24:51.907870] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.472 [2024-06-07 23:24:51.907873] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.472 [2024-06-07 23:24:51.907877] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcbc9c0) on tqpair=0xc4efd0 00:28:29.472 [2024-06-07 23:24:51.907884] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.472 [2024-06-07 23:24:51.907890] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.472 [2024-06-07 23:24:51.907893] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.472 [2024-06-07 23:24:51.907896] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcbcb20) on tqpair=0xc4efd0 00:28:29.472 ===================================================== 00:28:29.472 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:29.472 ===================================================== 00:28:29.472 Controller Capabilities/Features 00:28:29.472 ================================ 00:28:29.472 Vendor ID: 8086 00:28:29.472 Subsystem Vendor ID: 8086 00:28:29.472 Serial Number: SPDK00000000000001 00:28:29.472 Model Number: SPDK bdev Controller 00:28:29.472 Firmware Version: 24.01.1 00:28:29.472 Recommended Arb Burst: 6 00:28:29.472 IEEE OUI Identifier: e4 d2 5c 00:28:29.472 Multi-path I/O 00:28:29.472 May have multiple subsystem ports: Yes 00:28:29.472 May have multiple controllers: Yes 00:28:29.472 Associated with SR-IOV VF: No 00:28:29.472 Max Data Transfer Size: 131072 00:28:29.472 Max Number of Namespaces: 32 
00:28:29.472 Max Number of I/O Queues: 127 00:28:29.472 NVMe Specification Version (VS): 1.3 00:28:29.472 NVMe Specification Version (Identify): 1.3 00:28:29.472 Maximum Queue Entries: 128 00:28:29.472 Contiguous Queues Required: Yes 00:28:29.472 Arbitration Mechanisms Supported 00:28:29.472 Weighted Round Robin: Not Supported 00:28:29.472 Vendor Specific: Not Supported 00:28:29.472 Reset Timeout: 15000 ms 00:28:29.472 Doorbell Stride: 4 bytes 00:28:29.472 NVM Subsystem Reset: Not Supported 00:28:29.472 Command Sets Supported 00:28:29.472 NVM Command Set: Supported 00:28:29.472 Boot Partition: Not Supported 00:28:29.472 Memory Page Size Minimum: 4096 bytes 00:28:29.472 Memory Page Size Maximum: 4096 bytes 00:28:29.472 Persistent Memory Region: Not Supported 00:28:29.472 Optional Asynchronous Events Supported 00:28:29.472 Namespace Attribute Notices: Supported 00:28:29.472 Firmware Activation Notices: Not Supported 00:28:29.472 ANA Change Notices: Not Supported 00:28:29.472 PLE Aggregate Log Change Notices: Not Supported 00:28:29.472 LBA Status Info Alert Notices: Not Supported 00:28:29.472 EGE Aggregate Log Change Notices: Not Supported 00:28:29.472 Normal NVM Subsystem Shutdown event: Not Supported 00:28:29.472 Zone Descriptor Change Notices: Not Supported 00:28:29.472 Discovery Log Change Notices: Not Supported 00:28:29.472 Controller Attributes 00:28:29.472 128-bit Host Identifier: Supported 00:28:29.472 Non-Operational Permissive Mode: Not Supported 00:28:29.472 NVM Sets: Not Supported 00:28:29.472 Read Recovery Levels: Not Supported 00:28:29.472 Endurance Groups: Not Supported 00:28:29.472 Predictable Latency Mode: Not Supported 00:28:29.472 Traffic Based Keep ALive: Not Supported 00:28:29.472 Namespace Granularity: Not Supported 00:28:29.472 SQ Associations: Not Supported 00:28:29.472 UUID List: Not Supported 00:28:29.472 Multi-Domain Subsystem: Not Supported 00:28:29.472 Fixed Capacity Management: Not Supported 00:28:29.472 Variable Capacity Management: Not Supported 00:28:29.472 Delete Endurance Group: Not Supported 00:28:29.472 Delete NVM Set: Not Supported 00:28:29.472 Extended LBA Formats Supported: Not Supported 00:28:29.472 Flexible Data Placement Supported: Not Supported 00:28:29.472 00:28:29.472 Controller Memory Buffer Support 00:28:29.472 ================================ 00:28:29.472 Supported: No 00:28:29.472 00:28:29.472 Persistent Memory Region Support 00:28:29.472 ================================ 00:28:29.472 Supported: No 00:28:29.472 00:28:29.472 Admin Command Set Attributes 00:28:29.472 ============================ 00:28:29.472 Security Send/Receive: Not Supported 00:28:29.472 Format NVM: Not Supported 00:28:29.472 Firmware Activate/Download: Not Supported 00:28:29.472 Namespace Management: Not Supported 00:28:29.472 Device Self-Test: Not Supported 00:28:29.472 Directives: Not Supported 00:28:29.472 NVMe-MI: Not Supported 00:28:29.472 Virtualization Management: Not Supported 00:28:29.472 Doorbell Buffer Config: Not Supported 00:28:29.472 Get LBA Status Capability: Not Supported 00:28:29.472 Command & Feature Lockdown Capability: Not Supported 00:28:29.472 Abort Command Limit: 4 00:28:29.472 Async Event Request Limit: 4 00:28:29.472 Number of Firmware Slots: N/A 00:28:29.472 Firmware Slot 1 Read-Only: N/A 00:28:29.472 Firmware Activation Without Reset: N/A 00:28:29.472 Multiple Update Detection Support: N/A 00:28:29.472 Firmware Update Granularity: No Information Provided 00:28:29.472 Per-Namespace SMART Log: No 00:28:29.472 Asymmetric Namespace Access Log Page: Not 
Supported 00:28:29.472 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:29.473 Command Effects Log Page: Supported 00:28:29.473 Get Log Page Extended Data: Supported 00:28:29.473 Telemetry Log Pages: Not Supported 00:28:29.473 Persistent Event Log Pages: Not Supported 00:28:29.473 Supported Log Pages Log Page: May Support 00:28:29.473 Commands Supported & Effects Log Page: Not Supported 00:28:29.473 Feature Identifiers & Effects Log Page:May Support 00:28:29.473 NVMe-MI Commands & Effects Log Page: May Support 00:28:29.473 Data Area 4 for Telemetry Log: Not Supported 00:28:29.473 Error Log Page Entries Supported: 128 00:28:29.473 Keep Alive: Supported 00:28:29.473 Keep Alive Granularity: 10000 ms 00:28:29.473 00:28:29.473 NVM Command Set Attributes 00:28:29.473 ========================== 00:28:29.473 Submission Queue Entry Size 00:28:29.473 Max: 64 00:28:29.473 Min: 64 00:28:29.473 Completion Queue Entry Size 00:28:29.473 Max: 16 00:28:29.473 Min: 16 00:28:29.473 Number of Namespaces: 32 00:28:29.473 Compare Command: Supported 00:28:29.473 Write Uncorrectable Command: Not Supported 00:28:29.473 Dataset Management Command: Supported 00:28:29.473 Write Zeroes Command: Supported 00:28:29.473 Set Features Save Field: Not Supported 00:28:29.473 Reservations: Supported 00:28:29.473 Timestamp: Not Supported 00:28:29.473 Copy: Supported 00:28:29.473 Volatile Write Cache: Present 00:28:29.473 Atomic Write Unit (Normal): 1 00:28:29.473 Atomic Write Unit (PFail): 1 00:28:29.473 Atomic Compare & Write Unit: 1 00:28:29.473 Fused Compare & Write: Supported 00:28:29.473 Scatter-Gather List 00:28:29.473 SGL Command Set: Supported 00:28:29.473 SGL Keyed: Supported 00:28:29.473 SGL Bit Bucket Descriptor: Not Supported 00:28:29.473 SGL Metadata Pointer: Not Supported 00:28:29.473 Oversized SGL: Not Supported 00:28:29.473 SGL Metadata Address: Not Supported 00:28:29.473 SGL Offset: Supported 00:28:29.473 Transport SGL Data Block: Not Supported 00:28:29.473 Replay Protected Memory Block: Not Supported 00:28:29.473 00:28:29.473 Firmware Slot Information 00:28:29.473 ========================= 00:28:29.473 Active slot: 1 00:28:29.473 Slot 1 Firmware Revision: 24.01.1 00:28:29.473 00:28:29.473 00:28:29.473 Commands Supported and Effects 00:28:29.473 ============================== 00:28:29.473 Admin Commands 00:28:29.473 -------------- 00:28:29.473 Get Log Page (02h): Supported 00:28:29.473 Identify (06h): Supported 00:28:29.473 Abort (08h): Supported 00:28:29.473 Set Features (09h): Supported 00:28:29.473 Get Features (0Ah): Supported 00:28:29.473 Asynchronous Event Request (0Ch): Supported 00:28:29.473 Keep Alive (18h): Supported 00:28:29.473 I/O Commands 00:28:29.473 ------------ 00:28:29.473 Flush (00h): Supported LBA-Change 00:28:29.473 Write (01h): Supported LBA-Change 00:28:29.473 Read (02h): Supported 00:28:29.473 Compare (05h): Supported 00:28:29.473 Write Zeroes (08h): Supported LBA-Change 00:28:29.473 Dataset Management (09h): Supported LBA-Change 00:28:29.473 Copy (19h): Supported LBA-Change 00:28:29.473 Unknown (79h): Supported LBA-Change 00:28:29.473 Unknown (7Ah): Supported 00:28:29.473 00:28:29.473 Error Log 00:28:29.473 ========= 00:28:29.473 00:28:29.473 Arbitration 00:28:29.473 =========== 00:28:29.473 Arbitration Burst: 1 00:28:29.473 00:28:29.473 Power Management 00:28:29.473 ================ 00:28:29.473 Number of Power States: 1 00:28:29.473 Current Power State: Power State #0 00:28:29.473 Power State #0: 00:28:29.473 Max Power: 0.00 W 00:28:29.473 Non-Operational State: Operational 
00:28:29.473 Entry Latency: Not Reported 00:28:29.473 Exit Latency: Not Reported 00:28:29.473 Relative Read Throughput: 0 00:28:29.473 Relative Read Latency: 0 00:28:29.473 Relative Write Throughput: 0 00:28:29.473 Relative Write Latency: 0 00:28:29.473 Idle Power: Not Reported 00:28:29.473 Active Power: Not Reported 00:28:29.473 Non-Operational Permissive Mode: Not Supported 00:28:29.473 00:28:29.473 Health Information 00:28:29.473 ================== 00:28:29.473 Critical Warnings: 00:28:29.473 Available Spare Space: OK 00:28:29.473 Temperature: OK 00:28:29.473 Device Reliability: OK 00:28:29.473 Read Only: No 00:28:29.473 Volatile Memory Backup: OK 00:28:29.473 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:29.473 Temperature Threshold: [2024-06-07 23:24:51.908000] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.473 [2024-06-07 23:24:51.908005] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.473 [2024-06-07 23:24:51.908009] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xc4efd0) 00:28:29.473 [2024-06-07 23:24:51.908015] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.473 [2024-06-07 23:24:51.908026] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbcb20, cid 7, qid 0 00:28:29.473 [2024-06-07 23:24:51.908216] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.473 [2024-06-07 23:24:51.908223] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.473 [2024-06-07 23:24:51.908226] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.473 [2024-06-07 23:24:51.908230] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcbcb20) on tqpair=0xc4efd0 00:28:29.473 [2024-06-07 23:24:51.908266] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:28:29.473 [2024-06-07 23:24:51.908277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.473 [2024-06-07 23:24:51.908283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.473 [2024-06-07 23:24:51.908289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.473 [2024-06-07 23:24:51.908297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.473 [2024-06-07 23:24:51.908305] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.473 [2024-06-07 23:24:51.908309] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.473 [2024-06-07 23:24:51.908312] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4efd0) 00:28:29.473 [2024-06-07 23:24:51.908319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.473 [2024-06-07 23:24:51.908330] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbc5a0, cid 3, qid 0 00:28:29.473 [2024-06-07 23:24:51.908546] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.473 [2024-06-07 23:24:51.908553] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:28:29.473 [2024-06-07 23:24:51.908556] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.473 [2024-06-07 23:24:51.908560] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcbc5a0) on tqpair=0xc4efd0 00:28:29.473 [2024-06-07 23:24:51.908566] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.473 [2024-06-07 23:24:51.908570] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.473 [2024-06-07 23:24:51.908573] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4efd0) 00:28:29.473 [2024-06-07 23:24:51.908580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.473 [2024-06-07 23:24:51.908592] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbc5a0, cid 3, qid 0 00:28:29.473 [2024-06-07 23:24:51.908809] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.473 [2024-06-07 23:24:51.908816] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.473 [2024-06-07 23:24:51.908819] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.473 [2024-06-07 23:24:51.908823] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcbc5a0) on tqpair=0xc4efd0 00:28:29.473 [2024-06-07 23:24:51.908827] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:29.473 [2024-06-07 23:24:51.908831] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:29.473 [2024-06-07 23:24:51.908841] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.473 [2024-06-07 23:24:51.908844] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.473 [2024-06-07 23:24:51.908848] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4efd0) 00:28:29.473 [2024-06-07 23:24:51.908854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.473 [2024-06-07 23:24:51.908864] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbc5a0, cid 3, qid 0 00:28:29.473 [2024-06-07 23:24:51.909033] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.473 [2024-06-07 23:24:51.909040] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.473 [2024-06-07 23:24:51.909043] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.473 [2024-06-07 23:24:51.909047] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcbc5a0) on tqpair=0xc4efd0 00:28:29.473 [2024-06-07 23:24:51.909056] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.473 [2024-06-07 23:24:51.909060] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.473 [2024-06-07 23:24:51.909063] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4efd0) 00:28:29.473 [2024-06-07 23:24:51.909070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.473 [2024-06-07 23:24:51.909079] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbc5a0, cid 3, qid 0 00:28:29.473 [2024-06-07 23:24:51.909303] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:28:29.473 [2024-06-07 23:24:51.909312] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.473 [2024-06-07 23:24:51.909315] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.473 [2024-06-07 23:24:51.909319] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcbc5a0) on tqpair=0xc4efd0 00:28:29.474 [2024-06-07 23:24:51.909328] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.474 [2024-06-07 23:24:51.909332] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.474 [2024-06-07 23:24:51.909335] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4efd0) 00:28:29.474 [2024-06-07 23:24:51.909342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.474 [2024-06-07 23:24:51.909352] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbc5a0, cid 3, qid 0 00:28:29.474 [2024-06-07 23:24:51.909507] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.474 [2024-06-07 23:24:51.909513] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.474 [2024-06-07 23:24:51.909516] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.474 [2024-06-07 23:24:51.909520] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcbc5a0) on tqpair=0xc4efd0 00:28:29.474 [2024-06-07 23:24:51.909529] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.474 [2024-06-07 23:24:51.909533] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.474 [2024-06-07 23:24:51.909536] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4efd0) 00:28:29.474 [2024-06-07 23:24:51.909543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.474 [2024-06-07 23:24:51.909552] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbc5a0, cid 3, qid 0 00:28:29.474 [2024-06-07 23:24:51.909730] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.474 [2024-06-07 23:24:51.909736] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.474 [2024-06-07 23:24:51.909739] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.474 [2024-06-07 23:24:51.909743] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcbc5a0) on tqpair=0xc4efd0 00:28:29.474 [2024-06-07 23:24:51.909752] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.474 [2024-06-07 23:24:51.909755] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.474 [2024-06-07 23:24:51.909759] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4efd0) 00:28:29.474 [2024-06-07 23:24:51.909765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.474 [2024-06-07 23:24:51.909775] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbc5a0, cid 3, qid 0 00:28:29.474 [2024-06-07 23:24:51.909964] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.474 [2024-06-07 23:24:51.909970] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.474 [2024-06-07 23:24:51.909974] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:28:29.474 [2024-06-07 23:24:51.909977] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcbc5a0) on tqpair=0xc4efd0 00:28:29.474 [2024-06-07 23:24:51.909986] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.474 [2024-06-07 23:24:51.909990] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.474 [2024-06-07 23:24:51.909994] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4efd0) 00:28:29.474 [2024-06-07 23:24:51.910000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.474 [2024-06-07 23:24:51.910010] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbc5a0, cid 3, qid 0 00:28:29.474 [2024-06-07 23:24:51.910176] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.474 [2024-06-07 23:24:51.910182] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.474 [2024-06-07 23:24:51.910187] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.474 [2024-06-07 23:24:51.910191] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcbc5a0) on tqpair=0xc4efd0 00:28:29.474 [2024-06-07 23:24:51.910200] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:29.474 [2024-06-07 23:24:51.910204] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:29.474 [2024-06-07 23:24:51.910207] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc4efd0) 00:28:29.474 [2024-06-07 23:24:51.910214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.474 [2024-06-07 23:24:51.910223] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcbc5a0, cid 3, qid 0 00:28:29.474 [2024-06-07 23:24:51.914252] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:29.474 [2024-06-07 23:24:51.914260] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:29.474 [2024-06-07 23:24:51.914264] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:29.474 [2024-06-07 23:24:51.914268] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xcbc5a0) on tqpair=0xc4efd0 00:28:29.474 [2024-06-07 23:24:51.914275] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:28:29.474 0 Kelvin (-273 Celsius) 00:28:29.474 Available Spare: 0% 00:28:29.474 Available Spare Threshold: 0% 00:28:29.474 Life Percentage Used: 0% 00:28:29.474 Data Units Read: 0 00:28:29.474 Data Units Written: 0 00:28:29.474 Host Read Commands: 0 00:28:29.474 Host Write Commands: 0 00:28:29.474 Controller Busy Time: 0 minutes 00:28:29.474 Power Cycles: 0 00:28:29.474 Power On Hours: 0 hours 00:28:29.474 Unsafe Shutdowns: 0 00:28:29.474 Unrecoverable Media Errors: 0 00:28:29.474 Lifetime Error Log Entries: 0 00:28:29.474 Warning Temperature Time: 0 minutes 00:28:29.474 Critical Temperature Time: 0 minutes 00:28:29.474 00:28:29.474 Number of Queues 00:28:29.474 ================ 00:28:29.474 Number of I/O Submission Queues: 127 00:28:29.474 Number of I/O Completion Queues: 127 00:28:29.474 00:28:29.474 Active Namespaces 00:28:29.474 ================= 00:28:29.474 Namespace ID:1 00:28:29.474 Error Recovery Timeout: Unlimited 00:28:29.474 Command Set Identifier: NVM (00h) 00:28:29.474 
Deallocate: Supported 00:28:29.474 Deallocated/Unwritten Error: Not Supported 00:28:29.474 Deallocated Read Value: Unknown 00:28:29.474 Deallocate in Write Zeroes: Not Supported 00:28:29.474 Deallocated Guard Field: 0xFFFF 00:28:29.474 Flush: Supported 00:28:29.474 Reservation: Supported 00:28:29.474 Namespace Sharing Capabilities: Multiple Controllers 00:28:29.474 Size (in LBAs): 131072 (0GiB) 00:28:29.474 Capacity (in LBAs): 131072 (0GiB) 00:28:29.474 Utilization (in LBAs): 131072 (0GiB) 00:28:29.474 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:29.474 EUI64: ABCDEF0123456789 00:28:29.474 UUID: 97884c9a-3afc-44cc-8236-9ea70d9b98dd 00:28:29.474 Thin Provisioning: Not Supported 00:28:29.474 Per-NS Atomic Units: Yes 00:28:29.474 Atomic Boundary Size (Normal): 0 00:28:29.474 Atomic Boundary Size (PFail): 0 00:28:29.474 Atomic Boundary Offset: 0 00:28:29.474 Maximum Single Source Range Length: 65535 00:28:29.474 Maximum Copy Length: 65535 00:28:29.474 Maximum Source Range Count: 1 00:28:29.474 NGUID/EUI64 Never Reused: No 00:28:29.474 Namespace Write Protected: No 00:28:29.474 Number of LBA Formats: 1 00:28:29.474 Current LBA Format: LBA Format #00 00:28:29.474 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:29.474 00:28:29.474 23:24:51 -- host/identify.sh@51 -- # sync 00:28:29.474 23:24:51 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:29.474 23:24:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.474 23:24:51 -- common/autotest_common.sh@10 -- # set +x 00:28:29.474 23:24:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.474 23:24:51 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:29.474 23:24:51 -- host/identify.sh@56 -- # nvmftestfini 00:28:29.474 23:24:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:29.474 23:24:51 -- nvmf/common.sh@116 -- # sync 00:28:29.474 23:24:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:29.474 23:24:51 -- nvmf/common.sh@119 -- # set +e 00:28:29.474 23:24:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:29.474 23:24:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:29.474 rmmod nvme_tcp 00:28:29.474 rmmod nvme_fabrics 00:28:29.474 rmmod nvme_keyring 00:28:29.474 23:24:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:29.474 23:24:52 -- nvmf/common.sh@123 -- # set -e 00:28:29.474 23:24:52 -- nvmf/common.sh@124 -- # return 0 00:28:29.474 23:24:52 -- nvmf/common.sh@477 -- # '[' -n 2980906 ']' 00:28:29.474 23:24:52 -- nvmf/common.sh@478 -- # killprocess 2980906 00:28:29.474 23:24:52 -- common/autotest_common.sh@926 -- # '[' -z 2980906 ']' 00:28:29.474 23:24:52 -- common/autotest_common.sh@930 -- # kill -0 2980906 00:28:29.474 23:24:52 -- common/autotest_common.sh@931 -- # uname 00:28:29.474 23:24:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:29.474 23:24:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2980906 00:28:29.474 23:24:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:29.474 23:24:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:29.474 23:24:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2980906' 00:28:29.474 killing process with pid 2980906 00:28:29.474 23:24:52 -- common/autotest_common.sh@945 -- # kill 2980906 00:28:29.474 [2024-06-07 23:24:52.073191] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 
times 00:28:29.474 23:24:52 -- common/autotest_common.sh@950 -- # wait 2980906 00:28:29.736 23:24:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:29.736 23:24:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:29.736 23:24:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:29.736 23:24:52 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:29.736 23:24:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:29.736 23:24:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:29.736 23:24:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:29.736 23:24:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.645 23:24:54 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:31.645 00:28:31.645 real 0m11.012s 00:28:31.645 user 0m8.210s 00:28:31.645 sys 0m5.637s 00:28:31.645 23:24:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:31.645 23:24:54 -- common/autotest_common.sh@10 -- # set +x 00:28:31.645 ************************************ 00:28:31.645 END TEST nvmf_identify 00:28:31.645 ************************************ 00:28:31.645 23:24:54 -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:31.645 23:24:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:31.645 23:24:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:31.645 23:24:54 -- common/autotest_common.sh@10 -- # set +x 00:28:31.906 ************************************ 00:28:31.906 START TEST nvmf_perf 00:28:31.906 ************************************ 00:28:31.906 23:24:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:31.906 * Looking for test storage... 
00:28:31.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:31.906 23:24:54 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:31.906 23:24:54 -- nvmf/common.sh@7 -- # uname -s 00:28:31.906 23:24:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:31.906 23:24:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:31.906 23:24:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:31.906 23:24:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:31.906 23:24:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:31.906 23:24:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:31.906 23:24:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:31.906 23:24:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:31.906 23:24:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:31.906 23:24:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:31.906 23:24:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:31.906 23:24:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:31.906 23:24:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:31.906 23:24:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:31.906 23:24:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:31.906 23:24:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:31.906 23:24:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:31.906 23:24:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:31.906 23:24:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:31.906 23:24:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.906 23:24:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.906 23:24:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.906 23:24:54 -- paths/export.sh@5 -- # export PATH 00:28:31.906 23:24:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.906 23:24:54 -- nvmf/common.sh@46 -- # : 0 00:28:31.906 23:24:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:31.906 23:24:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:31.906 23:24:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:31.906 23:24:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:31.906 23:24:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:31.906 23:24:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:31.906 23:24:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:31.906 23:24:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:31.906 23:24:54 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:31.906 23:24:54 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:31.906 23:24:54 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:31.906 23:24:54 -- host/perf.sh@17 -- # nvmftestinit 00:28:31.906 23:24:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:31.906 23:24:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:31.906 23:24:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:31.906 23:24:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:31.906 23:24:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:31.906 23:24:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.906 23:24:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:31.906 23:24:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.906 23:24:54 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:31.906 23:24:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:31.906 23:24:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:31.906 23:24:54 -- common/autotest_common.sh@10 -- # set +x 00:28:38.494 23:25:01 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:38.494 23:25:01 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:38.494 23:25:01 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:38.494 23:25:01 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:38.494 23:25:01 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:38.494 23:25:01 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:38.494 23:25:01 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:38.494 23:25:01 -- nvmf/common.sh@294 -- # net_devs=() 
00:28:38.494 23:25:01 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:38.494 23:25:01 -- nvmf/common.sh@295 -- # e810=() 00:28:38.494 23:25:01 -- nvmf/common.sh@295 -- # local -ga e810 00:28:38.494 23:25:01 -- nvmf/common.sh@296 -- # x722=() 00:28:38.494 23:25:01 -- nvmf/common.sh@296 -- # local -ga x722 00:28:38.494 23:25:01 -- nvmf/common.sh@297 -- # mlx=() 00:28:38.494 23:25:01 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:38.494 23:25:01 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:38.494 23:25:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:38.494 23:25:01 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:38.494 23:25:01 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:38.494 23:25:01 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:38.494 23:25:01 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:38.494 23:25:01 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:38.494 23:25:01 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:38.494 23:25:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:38.494 23:25:01 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:38.494 23:25:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:38.755 23:25:01 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:38.755 23:25:01 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:38.755 23:25:01 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:38.755 23:25:01 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:38.755 23:25:01 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:38.755 23:25:01 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:38.755 23:25:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:38.755 23:25:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:38.755 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:38.755 23:25:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:38.755 23:25:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:38.755 23:25:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:38.755 23:25:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:38.755 23:25:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:38.755 23:25:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:38.755 23:25:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:38.755 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:38.755 23:25:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:38.755 23:25:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:38.755 23:25:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:38.755 23:25:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:38.755 23:25:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:38.755 23:25:01 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:38.755 23:25:01 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:38.755 23:25:01 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:38.755 23:25:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:38.755 23:25:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:38.755 23:25:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:38.755 23:25:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
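For reference, the device-ID matching traced above is how nvmf/common.sh classifies NICs for the test: 0x8086:0x159b is an Intel E810 port, 0x8086:0x37d2 an X722, and the 0x15b3 entries cover the Mellanox parts. A rough standalone sketch of the same check using stock tools (illustrative only, not part of the test suite):

  # list Intel E810 (8086:159b) ports and the kernel net devices behind them,
  # mirroring the /sys/bus/pci/devices/<addr>/net/ lookup the script performs
  for pci in $(lspci -Dnn | awk '/\[8086:159b\]/ {print $1}'); do
      echo "E810 port $pci -> $(ls /sys/bus/pci/devices/$pci/net/ 2>/dev/null)"
  done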
00:28:38.755 23:25:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:38.755 Found net devices under 0000:31:00.0: cvl_0_0 00:28:38.755 23:25:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:38.755 23:25:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:38.755 23:25:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:38.755 23:25:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:38.756 23:25:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:38.756 23:25:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:38.756 Found net devices under 0000:31:00.1: cvl_0_1 00:28:38.756 23:25:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:38.756 23:25:01 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:38.756 23:25:01 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:38.756 23:25:01 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:38.756 23:25:01 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:38.756 23:25:01 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:38.756 23:25:01 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:38.756 23:25:01 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:38.756 23:25:01 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:38.756 23:25:01 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:38.756 23:25:01 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:38.756 23:25:01 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:38.756 23:25:01 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:38.756 23:25:01 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:38.756 23:25:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:38.756 23:25:01 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:38.756 23:25:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:38.756 23:25:01 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:38.756 23:25:01 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:38.756 23:25:01 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:38.756 23:25:01 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:38.756 23:25:01 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:38.756 23:25:01 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:39.016 23:25:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:39.016 23:25:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:39.016 23:25:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:39.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:39.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:28:39.016 00:28:39.016 --- 10.0.0.2 ping statistics --- 00:28:39.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.016 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:28:39.016 23:25:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:39.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:39.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:28:39.016 00:28:39.016 --- 10.0.0.1 ping statistics --- 00:28:39.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.016 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:28:39.016 23:25:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:39.016 23:25:01 -- nvmf/common.sh@410 -- # return 0 00:28:39.016 23:25:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:39.016 23:25:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:39.016 23:25:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:39.016 23:25:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:39.016 23:25:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:39.016 23:25:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:39.016 23:25:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:39.016 23:25:01 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:39.016 23:25:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:39.016 23:25:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:39.016 23:25:01 -- common/autotest_common.sh@10 -- # set +x 00:28:39.016 23:25:01 -- nvmf/common.sh@469 -- # nvmfpid=2985393 00:28:39.016 23:25:01 -- nvmf/common.sh@470 -- # waitforlisten 2985393 00:28:39.016 23:25:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:39.016 23:25:01 -- common/autotest_common.sh@819 -- # '[' -z 2985393 ']' 00:28:39.016 23:25:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.016 23:25:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:39.016 23:25:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:39.016 23:25:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:39.016 23:25:01 -- common/autotest_common.sh@10 -- # set +x 00:28:39.016 [2024-06-07 23:25:01.579446] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:28:39.016 [2024-06-07 23:25:01.579498] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:39.016 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.016 [2024-06-07 23:25:01.648885] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:39.016 [2024-06-07 23:25:01.679405] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:39.016 [2024-06-07 23:25:01.679556] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:39.016 [2024-06-07 23:25:01.679566] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:39.016 [2024-06-07 23:25:01.679574] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
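For orientation, the ping checks above verify the split topology the test just built: cvl_0_0 was moved into the cvl_0_0_ns_spdk namespace as the target-side port (10.0.0.2) while cvl_0_1 stayed in the root namespace as the initiator side (10.0.0.1), and nvmf_tgt is then launched inside that namespace. A condensed sketch of the same sequence, with the SPDK path shortened and the loopback/iptables steps omitted (flags taken from the trace):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  # start the target on cores 0-3 (-m 0xF) with every tracepoint group enabled (-e 0xFFFF)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

As the app_setup_trace notice above suggests, 'spdk_trace -s nvmf -i 0' can later be used to snapshot the enabled tracepoints.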
00:28:39.016 [2024-06-07 23:25:01.679779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:39.016 [2024-06-07 23:25:01.683256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:39.016 [2024-06-07 23:25:01.683358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:39.016 [2024-06-07 23:25:01.683524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.959 23:25:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:39.959 23:25:02 -- common/autotest_common.sh@852 -- # return 0 00:28:39.959 23:25:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:39.959 23:25:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:39.959 23:25:02 -- common/autotest_common.sh@10 -- # set +x 00:28:39.959 23:25:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:39.959 23:25:02 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:39.959 23:25:02 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:40.219 23:25:02 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:40.219 23:25:02 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:40.479 23:25:03 -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:28:40.479 23:25:03 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:40.741 23:25:03 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:40.741 23:25:03 -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:28:40.741 23:25:03 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:40.741 23:25:03 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:40.741 23:25:03 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:40.741 [2024-06-07 23:25:03.336560] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:40.741 23:25:03 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:41.001 23:25:03 -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:41.001 23:25:03 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:41.262 23:25:03 -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:41.262 23:25:03 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:41.262 23:25:03 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:41.521 [2024-06-07 23:25:03.991129] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:41.521 23:25:04 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:41.521 23:25:04 -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:28:41.521 23:25:04 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:28:41.521 23:25:04 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 
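Condensing the xtrace above: once nvmf_tgt is up, the entire target side of the perf test is built with a handful of rpc.py calls. A minimal standalone equivalent, assuming the default /var/tmp/spdk.sock RPC socket and that gen_nvme.sh has already attached the local drive as Nvme0n1:

  rpc.py bdev_malloc_create 64 512      # 64 MB malloc bdev, 512 B blocks -> Malloc0
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The spdk_nvme_perf runs that follow exercise this subsystem from the initiator side with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420', after a baseline pass against the local PCIe drive (trtype:PCIe traddr:0000:65:00.0).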
00:28:41.521 23:25:04 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:28:42.904 Initializing NVMe Controllers 00:28:42.904 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:28:42.904 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:28:42.904 Initialization complete. Launching workers. 00:28:42.904 ======================================================== 00:28:42.904 Latency(us) 00:28:42.904 Device Information : IOPS MiB/s Average min max 00:28:42.904 PCIE (0000:65:00.0) NSID 1 from core 0: 81163.59 317.05 393.55 13.23 5255.06 00:28:42.904 ======================================================== 00:28:42.904 Total : 81163.59 317.05 393.55 13.23 5255.06 00:28:42.904 00:28:42.904 23:25:05 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:42.904 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.287 Initializing NVMe Controllers 00:28:44.287 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:44.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:44.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:44.288 Initialization complete. Launching workers. 00:28:44.288 ======================================================== 00:28:44.288 Latency(us) 00:28:44.288 Device Information : IOPS MiB/s Average min max 00:28:44.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 99.00 0.39 10103.15 301.14 45042.96 00:28:44.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 16485.41 7961.67 47902.18 00:28:44.288 ======================================================== 00:28:44.288 Total : 160.00 0.62 12536.38 301.14 47902.18 00:28:44.288 00:28:44.288 23:25:06 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:44.288 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.673 Initializing NVMe Controllers 00:28:45.673 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:45.673 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:45.673 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:45.673 Initialization complete. Launching workers. 
00:28:45.673 ======================================================== 00:28:45.673 Latency(us) 00:28:45.673 Device Information : IOPS MiB/s Average min max 00:28:45.673 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10369.75 40.51 3087.80 382.46 8281.81 00:28:45.673 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3872.10 15.13 8275.72 5165.04 17546.41 00:28:45.673 ======================================================== 00:28:45.673 Total : 14241.85 55.63 4498.30 382.46 17546.41 00:28:45.673 00:28:45.673 23:25:08 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:45.673 23:25:08 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:45.673 23:25:08 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:45.673 EAL: No free 2048 kB hugepages reported on node 1 00:28:48.217 Initializing NVMe Controllers 00:28:48.217 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:48.218 Controller IO queue size 128, less than required. 00:28:48.218 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:48.218 Controller IO queue size 128, less than required. 00:28:48.218 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:48.218 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:48.218 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:48.218 Initialization complete. Launching workers. 00:28:48.218 ======================================================== 00:28:48.218 Latency(us) 00:28:48.218 Device Information : IOPS MiB/s Average min max 00:28:48.218 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1111.85 277.96 118602.98 62786.07 175762.37 00:28:48.218 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 577.42 144.36 228635.04 67182.47 353547.23 00:28:48.218 ======================================================== 00:28:48.218 Total : 1689.27 422.32 156213.82 62786.07 353547.23 00:28:48.218 00:28:48.218 23:25:10 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:48.218 EAL: No free 2048 kB hugepages reported on node 1 00:28:48.218 No valid NVMe controllers or AIO or URING devices found 00:28:48.218 Initializing NVMe Controllers 00:28:48.218 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:48.218 Controller IO queue size 128, less than required. 00:28:48.218 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:48.218 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:48.218 Controller IO queue size 128, less than required. 00:28:48.218 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:48.218 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:28:48.218 WARNING: Some requested NVMe devices were skipped 00:28:48.218 23:25:10 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:48.218 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.763 Initializing NVMe Controllers 00:28:50.763 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:50.763 Controller IO queue size 128, less than required. 00:28:50.763 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:50.763 Controller IO queue size 128, less than required. 00:28:50.763 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:50.763 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:50.763 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:50.763 Initialization complete. Launching workers. 00:28:50.763 00:28:50.763 ==================== 00:28:50.763 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:50.763 TCP transport: 00:28:50.763 polls: 26898 00:28:50.763 idle_polls: 9715 00:28:50.763 sock_completions: 17183 00:28:50.763 nvme_completions: 4471 00:28:50.763 submitted_requests: 6913 00:28:50.763 queued_requests: 1 00:28:50.763 00:28:50.763 ==================== 00:28:50.763 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:50.763 TCP transport: 00:28:50.763 polls: 27095 00:28:50.763 idle_polls: 9764 00:28:50.763 sock_completions: 17331 00:28:50.763 nvme_completions: 4526 00:28:50.763 submitted_requests: 7010 00:28:50.763 queued_requests: 1 00:28:50.763 ======================================================== 00:28:50.763 Latency(us) 00:28:50.763 Device Information : IOPS MiB/s Average min max 00:28:50.763 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1180.71 295.18 111479.00 51301.16 180850.66 00:28:50.763 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1194.21 298.55 109734.60 55814.98 171784.86 00:28:50.763 ======================================================== 00:28:50.763 Total : 2374.92 593.73 110601.85 51301.16 180850.66 00:28:50.763 00:28:50.763 23:25:13 -- host/perf.sh@66 -- # sync 00:28:50.763 23:25:13 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:50.763 23:25:13 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:50.763 23:25:13 -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:28:50.763 23:25:13 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:52.147 23:25:14 -- host/perf.sh@72 -- # ls_guid=a550250f-6ebb-4e95-8bc6-14de7adbab58 00:28:52.147 23:25:14 -- host/perf.sh@73 -- # get_lvs_free_mb a550250f-6ebb-4e95-8bc6-14de7adbab58 00:28:52.147 23:25:14 -- common/autotest_common.sh@1343 -- # local lvs_uuid=a550250f-6ebb-4e95-8bc6-14de7adbab58 00:28:52.147 23:25:14 -- common/autotest_common.sh@1344 -- # local lvs_info 00:28:52.147 23:25:14 -- common/autotest_common.sh@1345 -- # local fc 00:28:52.147 23:25:14 -- common/autotest_common.sh@1346 -- # local cs 00:28:52.147 23:25:14 -- common/autotest_common.sh@1347 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:52.147 23:25:14 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:28:52.147 { 00:28:52.147 "uuid": "a550250f-6ebb-4e95-8bc6-14de7adbab58", 00:28:52.147 "name": "lvs_0", 00:28:52.147 "base_bdev": "Nvme0n1", 00:28:52.147 "total_data_clusters": 457407, 00:28:52.147 "free_clusters": 457407, 00:28:52.147 "block_size": 512, 00:28:52.147 "cluster_size": 4194304 00:28:52.147 } 00:28:52.147 ]' 00:28:52.147 23:25:14 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="a550250f-6ebb-4e95-8bc6-14de7adbab58") .free_clusters' 00:28:52.147 23:25:14 -- common/autotest_common.sh@1348 -- # fc=457407 00:28:52.147 23:25:14 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="a550250f-6ebb-4e95-8bc6-14de7adbab58") .cluster_size' 00:28:52.147 23:25:14 -- common/autotest_common.sh@1349 -- # cs=4194304 00:28:52.147 23:25:14 -- common/autotest_common.sh@1352 -- # free_mb=1829628 00:28:52.147 23:25:14 -- common/autotest_common.sh@1353 -- # echo 1829628 00:28:52.147 1829628 00:28:52.147 23:25:14 -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:28:52.147 23:25:14 -- host/perf.sh@78 -- # free_mb=20480 00:28:52.147 23:25:14 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a550250f-6ebb-4e95-8bc6-14de7adbab58 lbd_0 20480 00:28:52.408 23:25:14 -- host/perf.sh@80 -- # lb_guid=cbe7ac2f-6f58-4ff1-abed-d01ed7611bb8 00:28:52.408 23:25:14 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore cbe7ac2f-6f58-4ff1-abed-d01ed7611bb8 lvs_n_0 00:28:54.323 23:25:16 -- host/perf.sh@83 -- # ls_nested_guid=c129fc59-1cd7-42e5-90c1-b2c7c0227ef1 00:28:54.323 23:25:16 -- host/perf.sh@84 -- # get_lvs_free_mb c129fc59-1cd7-42e5-90c1-b2c7c0227ef1 00:28:54.323 23:25:16 -- common/autotest_common.sh@1343 -- # local lvs_uuid=c129fc59-1cd7-42e5-90c1-b2c7c0227ef1 00:28:54.323 23:25:16 -- common/autotest_common.sh@1344 -- # local lvs_info 00:28:54.323 23:25:16 -- common/autotest_common.sh@1345 -- # local fc 00:28:54.323 23:25:16 -- common/autotest_common.sh@1346 -- # local cs 00:28:54.323 23:25:16 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:54.323 23:25:16 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:28:54.323 { 00:28:54.323 "uuid": "a550250f-6ebb-4e95-8bc6-14de7adbab58", 00:28:54.323 "name": "lvs_0", 00:28:54.323 "base_bdev": "Nvme0n1", 00:28:54.323 "total_data_clusters": 457407, 00:28:54.323 "free_clusters": 452287, 00:28:54.323 "block_size": 512, 00:28:54.323 "cluster_size": 4194304 00:28:54.323 }, 00:28:54.323 { 00:28:54.323 "uuid": "c129fc59-1cd7-42e5-90c1-b2c7c0227ef1", 00:28:54.323 "name": "lvs_n_0", 00:28:54.323 "base_bdev": "cbe7ac2f-6f58-4ff1-abed-d01ed7611bb8", 00:28:54.323 "total_data_clusters": 5114, 00:28:54.323 "free_clusters": 5114, 00:28:54.323 "block_size": 512, 00:28:54.323 "cluster_size": 4194304 00:28:54.323 } 00:28:54.323 ]' 00:28:54.323 23:25:16 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="c129fc59-1cd7-42e5-90c1-b2c7c0227ef1") .free_clusters' 00:28:54.323 23:25:16 -- common/autotest_common.sh@1348 -- # fc=5114 00:28:54.323 23:25:16 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="c129fc59-1cd7-42e5-90c1-b2c7c0227ef1") .cluster_size' 00:28:54.323 23:25:16 -- common/autotest_common.sh@1349 -- # cs=4194304 00:28:54.323 23:25:16 -- common/autotest_common.sh@1352 
-- # free_mb=20456 00:28:54.323 23:25:16 -- common/autotest_common.sh@1353 -- # echo 20456 00:28:54.323 20456 00:28:54.323 23:25:16 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:54.323 23:25:16 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c129fc59-1cd7-42e5-90c1-b2c7c0227ef1 lbd_nest_0 20456 00:28:54.323 23:25:16 -- host/perf.sh@88 -- # lb_nested_guid=44e03a0a-cfbd-4fad-83dc-78659f8ef1db 00:28:54.323 23:25:16 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:54.584 23:25:17 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:54.584 23:25:17 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 44e03a0a-cfbd-4fad-83dc-78659f8ef1db 00:28:54.584 23:25:17 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:54.848 23:25:17 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:54.848 23:25:17 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:54.848 23:25:17 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:54.848 23:25:17 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:54.848 23:25:17 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:54.848 EAL: No free 2048 kB hugepages reported on node 1 00:29:07.132 Initializing NVMe Controllers 00:29:07.132 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:07.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:07.132 Initialization complete. Launching workers. 00:29:07.132 ======================================================== 00:29:07.132 Latency(us) 00:29:07.132 Device Information : IOPS MiB/s Average min max 00:29:07.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 44.00 0.02 22747.05 161.23 45263.07 00:29:07.132 ======================================================== 00:29:07.132 Total : 44.00 0.02 22747.05 161.23 45263.07 00:29:07.132 00:29:07.132 23:25:27 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:07.132 23:25:27 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:07.132 EAL: No free 2048 kB hugepages reported on node 1 00:29:17.130 Initializing NVMe Controllers 00:29:17.130 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:17.130 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:17.130 Initialization complete. Launching workers. 
00:29:17.130 ======================================================== 00:29:17.130 Latency(us) 00:29:17.130 Device Information : IOPS MiB/s Average min max 00:29:17.130 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 62.70 7.84 15974.88 7976.40 51877.59 00:29:17.130 ======================================================== 00:29:17.130 Total : 62.70 7.84 15974.88 7976.40 51877.59 00:29:17.130 00:29:17.130 23:25:38 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:17.130 23:25:38 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:17.130 23:25:38 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:17.130 EAL: No free 2048 kB hugepages reported on node 1 00:29:27.133 Initializing NVMe Controllers 00:29:27.133 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:27.133 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:27.133 Initialization complete. Launching workers. 00:29:27.133 ======================================================== 00:29:27.133 Latency(us) 00:29:27.133 Device Information : IOPS MiB/s Average min max 00:29:27.133 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8978.65 4.38 3564.52 364.09 10463.40 00:29:27.133 ======================================================== 00:29:27.133 Total : 8978.65 4.38 3564.52 364.09 10463.40 00:29:27.133 00:29:27.133 23:25:48 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:27.133 23:25:48 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:27.133 EAL: No free 2048 kB hugepages reported on node 1 00:29:37.136 Initializing NVMe Controllers 00:29:37.136 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:37.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:37.136 Initialization complete. Launching workers. 00:29:37.136 ======================================================== 00:29:37.136 Latency(us) 00:29:37.136 Device Information : IOPS MiB/s Average min max 00:29:37.136 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3173.30 396.66 10086.06 736.15 22737.08 00:29:37.136 ======================================================== 00:29:37.136 Total : 3173.30 396.66 10086.06 736.15 22737.08 00:29:37.136 00:29:37.136 23:25:58 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:37.136 23:25:58 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:37.136 23:25:58 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:37.136 EAL: No free 2048 kB hugepages reported on node 1 00:29:47.138 Initializing NVMe Controllers 00:29:47.138 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:47.138 Controller IO queue size 128, less than required. 00:29:47.138 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:47.138 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:47.138 Initialization complete. Launching workers. 
00:29:47.138 ======================================================== 00:29:47.138 Latency(us) 00:29:47.138 Device Information : IOPS MiB/s Average min max 00:29:47.138 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15849.60 7.74 8079.78 2126.34 48462.57 00:29:47.138 ======================================================== 00:29:47.138 Total : 15849.60 7.74 8079.78 2126.34 48462.57 00:29:47.138 00:29:47.138 23:26:09 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:47.138 23:26:09 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:47.138 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.134 Initializing NVMe Controllers 00:29:57.134 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:57.134 Controller IO queue size 128, less than required. 00:29:57.134 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:57.134 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:57.134 Initialization complete. Launching workers. 00:29:57.134 ======================================================== 00:29:57.134 Latency(us) 00:29:57.134 Device Information : IOPS MiB/s Average min max 00:29:57.134 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1178.50 147.31 109225.64 15226.52 228187.20 00:29:57.134 ======================================================== 00:29:57.134 Total : 1178.50 147.31 109225.64 15226.52 228187.20 00:29:57.134 00:29:57.134 23:26:19 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:57.134 23:26:19 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 44e03a0a-cfbd-4fad-83dc-78659f8ef1db 00:29:58.519 23:26:21 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:58.779 23:26:21 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cbe7ac2f-6f58-4ff1-abed-d01ed7611bb8 00:29:59.040 23:26:21 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:59.040 23:26:21 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:59.040 23:26:21 -- host/perf.sh@114 -- # nvmftestfini 00:29:59.040 23:26:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:59.041 23:26:21 -- nvmf/common.sh@116 -- # sync 00:29:59.041 23:26:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:59.041 23:26:21 -- nvmf/common.sh@119 -- # set +e 00:29:59.041 23:26:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:59.041 23:26:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:59.041 rmmod nvme_tcp 00:29:59.041 rmmod nvme_fabrics 00:29:59.041 rmmod nvme_keyring 00:29:59.301 23:26:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:59.301 23:26:21 -- nvmf/common.sh@123 -- # set -e 00:29:59.301 23:26:21 -- nvmf/common.sh@124 -- # return 0 00:29:59.301 23:26:21 -- nvmf/common.sh@477 -- # '[' -n 2985393 ']' 00:29:59.301 23:26:21 -- nvmf/common.sh@478 -- # killprocess 2985393 00:29:59.301 23:26:21 -- common/autotest_common.sh@926 -- # '[' -z 2985393 ']' 00:29:59.301 23:26:21 -- common/autotest_common.sh@930 -- # kill 
-0 2985393 00:29:59.301 23:26:21 -- common/autotest_common.sh@931 -- # uname 00:29:59.301 23:26:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:59.301 23:26:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2985393 00:29:59.301 23:26:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:59.301 23:26:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:59.301 23:26:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2985393' 00:29:59.301 killing process with pid 2985393 00:29:59.301 23:26:21 -- common/autotest_common.sh@945 -- # kill 2985393 00:29:59.301 23:26:21 -- common/autotest_common.sh@950 -- # wait 2985393 00:30:01.212 23:26:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:01.212 23:26:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:01.212 23:26:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:01.212 23:26:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:01.212 23:26:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:01.212 23:26:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:01.212 23:26:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:01.212 23:26:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:03.757 23:26:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:03.757 00:30:03.757 real 1m31.508s 00:30:03.757 user 5m24.522s 00:30:03.757 sys 0m13.953s 00:30:03.757 23:26:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:03.757 23:26:25 -- common/autotest_common.sh@10 -- # set +x 00:30:03.757 ************************************ 00:30:03.757 END TEST nvmf_perf 00:30:03.757 ************************************ 00:30:03.757 23:26:25 -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:03.757 23:26:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:03.757 23:26:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:03.757 23:26:25 -- common/autotest_common.sh@10 -- # set +x 00:30:03.757 ************************************ 00:30:03.757 START TEST nvmf_fio_host 00:30:03.757 ************************************ 00:30:03.757 23:26:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:03.757 * Looking for test storage... 
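The modprobe/rmmod lines and the kill/wait sequence above are the standard nvmftestfini/killprocess teardown that closes each of these tests. A simplified sketch of that pattern, using the pid from this run as a placeholder:

# Sketch: unload host-side NVMe modules, then stop the target by pid and reap it.
pid=2985393                                   # nvmfpid recorded when the target was started

modprobe -v -r nvme-tcp                       # the rmmod lines above are this command's output
modprobe -v -r nvme-fabrics

if kill -0 "$pid" 2>/dev/null; then           # signal 0: is the target still alive?
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" != sudo ] && kill "$pid"        # refuse to kill a sudo wrapper by mistake
    wait "$pid"                                # reap it so the next test starts clean
fi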
00:30:03.757 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:03.757 23:26:25 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:03.757 23:26:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:03.757 23:26:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:03.757 23:26:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:03.757 23:26:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.758 23:26:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.758 23:26:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.758 23:26:25 -- paths/export.sh@5 -- # export PATH 00:30:03.758 23:26:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.758 23:26:25 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:03.758 23:26:25 -- nvmf/common.sh@7 -- # uname -s 00:30:03.758 23:26:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:03.758 23:26:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:03.758 23:26:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:03.758 23:26:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:03.758 23:26:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:03.758 23:26:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:03.758 23:26:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:03.758 23:26:25 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:03.758 23:26:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:03.758 23:26:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:03.758 23:26:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:03.758 23:26:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:03.758 23:26:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:03.758 23:26:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:03.758 23:26:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:03.758 23:26:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:03.758 23:26:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:03.758 23:26:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:03.758 23:26:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:03.758 23:26:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.758 23:26:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.758 23:26:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.758 23:26:26 -- paths/export.sh@5 -- # export PATH 00:30:03.758 23:26:26 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.758 23:26:26 -- nvmf/common.sh@46 -- # : 0 00:30:03.758 23:26:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:03.758 23:26:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:03.758 23:26:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:03.758 23:26:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:03.758 23:26:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:03.758 23:26:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:03.758 23:26:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:03.758 23:26:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:03.758 23:26:26 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:03.758 23:26:26 -- host/fio.sh@14 -- # nvmftestinit 00:30:03.758 23:26:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:03.758 23:26:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:03.758 23:26:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:03.758 23:26:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:03.758 23:26:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:03.758 23:26:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:03.758 23:26:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:03.758 23:26:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:03.758 23:26:26 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:03.758 23:26:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:03.758 23:26:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:03.758 23:26:26 -- common/autotest_common.sh@10 -- # set +x 00:30:10.414 23:26:32 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:10.414 23:26:32 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:10.414 23:26:32 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:10.414 23:26:32 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:10.414 23:26:32 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:10.414 23:26:32 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:10.414 23:26:32 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:10.414 23:26:32 -- nvmf/common.sh@294 -- # net_devs=() 00:30:10.414 23:26:32 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:10.414 23:26:32 -- nvmf/common.sh@295 -- # e810=() 00:30:10.414 23:26:32 -- nvmf/common.sh@295 -- # local -ga e810 00:30:10.414 23:26:32 -- nvmf/common.sh@296 -- # x722=() 00:30:10.414 23:26:32 -- nvmf/common.sh@296 -- # local -ga x722 00:30:10.414 23:26:32 -- nvmf/common.sh@297 -- # mlx=() 00:30:10.414 23:26:32 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:10.414 23:26:32 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:10.414 23:26:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:10.414 23:26:32 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:10.414 23:26:32 -- 
nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:10.414 23:26:32 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:10.414 23:26:32 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:10.414 23:26:32 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:10.414 23:26:32 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:10.414 23:26:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:10.414 23:26:32 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:10.414 23:26:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:10.414 23:26:32 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:10.414 23:26:32 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:10.414 23:26:32 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:10.414 23:26:32 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:10.414 23:26:32 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:10.414 23:26:32 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:10.414 23:26:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:10.414 23:26:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:10.414 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:10.414 23:26:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:10.414 23:26:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:10.414 23:26:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:10.414 23:26:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:10.414 23:26:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:10.414 23:26:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:10.414 23:26:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:10.414 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:10.414 23:26:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:10.414 23:26:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:10.414 23:26:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:10.414 23:26:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:10.414 23:26:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:10.414 23:26:32 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:10.414 23:26:32 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:10.414 23:26:32 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:10.414 23:26:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:10.414 23:26:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:10.414 23:26:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:10.414 23:26:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:10.414 23:26:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:10.414 Found net devices under 0000:31:00.0: cvl_0_0 00:30:10.414 23:26:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:10.414 23:26:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:10.414 23:26:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:10.414 23:26:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:10.414 23:26:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:10.414 23:26:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:10.414 Found net devices under 0000:31:00.1: cvl_0_1 
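The "Found net devices under ..." lines come from mapping each supported PCI function to its kernel interface through sysfs. A sketch of that lookup, hard-coding the two E810 addresses reported above:

# Sketch: resolve each NIC's PCI address to its net device name via /sys.
pci_devs=(0000:31:00.0 0000:31:00.1)                      # the two E810 ports found above
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)      # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")               # strip the path, keep the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done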
00:30:10.414 23:26:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:10.414 23:26:32 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:10.414 23:26:32 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:10.414 23:26:32 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:10.414 23:26:32 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:10.414 23:26:32 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:10.414 23:26:32 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:10.414 23:26:32 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:10.415 23:26:32 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:10.415 23:26:32 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:10.415 23:26:32 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:10.415 23:26:32 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:10.415 23:26:32 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:10.415 23:26:32 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:10.415 23:26:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:10.415 23:26:32 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:10.415 23:26:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:10.415 23:26:32 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:10.415 23:26:32 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:10.415 23:26:32 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:10.415 23:26:32 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:10.415 23:26:32 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:10.415 23:26:32 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:10.415 23:26:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:10.415 23:26:32 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:10.415 23:26:32 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:10.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:10.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.570 ms 00:30:10.415 00:30:10.415 --- 10.0.0.2 ping statistics --- 00:30:10.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:10.415 rtt min/avg/max/mdev = 0.570/0.570/0.570/0.000 ms 00:30:10.415 23:26:32 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:10.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:10.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:30:10.415 00:30:10.415 --- 10.0.0.1 ping statistics --- 00:30:10.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:10.415 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:30:10.415 23:26:32 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:10.415 23:26:32 -- nvmf/common.sh@410 -- # return 0 00:30:10.415 23:26:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:10.415 23:26:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:10.415 23:26:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:10.415 23:26:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:10.415 23:26:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:10.415 23:26:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:10.415 23:26:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:10.415 23:26:32 -- host/fio.sh@16 -- # [[ y != y ]] 00:30:10.415 23:26:32 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:30:10.415 23:26:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:10.415 23:26:32 -- common/autotest_common.sh@10 -- # set +x 00:30:10.415 23:26:33 -- host/fio.sh@24 -- # nvmfpid=3005436 00:30:10.415 23:26:33 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:10.415 23:26:33 -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:10.415 23:26:33 -- host/fio.sh@28 -- # waitforlisten 3005436 00:30:10.415 23:26:33 -- common/autotest_common.sh@819 -- # '[' -z 3005436 ']' 00:30:10.415 23:26:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:10.415 23:26:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:10.415 23:26:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:10.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:10.415 23:26:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:10.415 23:26:33 -- common/autotest_common.sh@10 -- # set +x 00:30:10.415 [2024-06-07 23:26:33.059727] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:10.415 [2024-06-07 23:26:33.059787] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:10.674 EAL: No free 2048 kB hugepages reported on node 1 00:30:10.674 [2024-06-07 23:26:33.131379] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:10.674 [2024-06-07 23:26:33.169667] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:10.674 [2024-06-07 23:26:33.169811] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:10.674 [2024-06-07 23:26:33.169821] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:10.674 [2024-06-07 23:26:33.169830] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
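The topology behind the 10.0.0.1/10.0.0.2 pings is a single host split in two: the target-side port is moved into a network namespace while the initiator-side port stays in the root namespace, so NVMe/TCP traffic really crosses the NICs. A sketch of that setup plus the nvmf_tgt launch shown above; the final polling loop is only a crude stand-in for the harness's waitforlisten:

# Sketch: target port in a netns, initiator port in the root ns, then start nvmf_tgt.
NS=cvl_0_0_ns_spdk
TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                              # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1   # sanity-check both directions

ip netns exec "$NS" "$TGT" -i 0 -e 0xFFFF -m 0xF &
until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done            # crude stand-in for waitforlisten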
00:30:10.674 [2024-06-07 23:26:33.169969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:10.674 [2024-06-07 23:26:33.170091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:10.675 [2024-06-07 23:26:33.170267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:10.675 [2024-06-07 23:26:33.170270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:11.247 23:26:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:11.247 23:26:33 -- common/autotest_common.sh@852 -- # return 0 00:30:11.247 23:26:33 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:11.507 [2024-06-07 23:26:33.967821] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:11.507 23:26:33 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:30:11.507 23:26:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:11.507 23:26:34 -- common/autotest_common.sh@10 -- # set +x 00:30:11.507 23:26:34 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:30:11.767 Malloc1 00:30:11.767 23:26:34 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:11.767 23:26:34 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:12.027 23:26:34 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:12.027 [2024-06-07 23:26:34.677464] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:12.027 23:26:34 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:12.288 23:26:34 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:12.288 23:26:34 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:12.288 23:26:34 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:12.288 23:26:34 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:12.288 23:26:34 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:12.288 23:26:34 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:12.288 23:26:34 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:12.288 23:26:34 -- common/autotest_common.sh@1320 -- # shift 00:30:12.288 23:26:34 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:12.288 23:26:34 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:12.288 23:26:34 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:12.288 23:26:34 -- common/autotest_common.sh@1324 -- # grep 
libasan 00:30:12.288 23:26:34 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:12.288 23:26:34 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:12.288 23:26:34 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:12.288 23:26:34 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:12.288 23:26:34 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:12.288 23:26:34 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:12.288 23:26:34 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:12.288 23:26:34 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:12.288 23:26:34 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:12.288 23:26:34 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:12.288 23:26:34 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:12.856 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:12.856 fio-3.35 00:30:12.856 Starting 1 thread 00:30:12.856 EAL: No free 2048 kB hugepages reported on node 1 00:30:15.404 00:30:15.404 test: (groupid=0, jobs=1): err= 0: pid=3005999: Fri Jun 7 23:26:37 2024 00:30:15.404 read: IOPS=14.9k, BW=58.4MiB/s (61.2MB/s)(117MiB/2004msec) 00:30:15.404 slat (usec): min=2, max=274, avg= 2.16, stdev= 2.25 00:30:15.404 clat (usec): min=3130, max=9150, avg=4733.72, stdev=404.03 00:30:15.404 lat (usec): min=3132, max=9155, avg=4735.87, stdev=404.24 00:30:15.404 clat percentiles (usec): 00:30:15.404 | 1.00th=[ 3916], 5.00th=[ 4146], 10.00th=[ 4293], 20.00th=[ 4424], 00:30:15.404 | 30.00th=[ 4555], 40.00th=[ 4621], 50.00th=[ 4686], 60.00th=[ 4817], 00:30:15.404 | 70.00th=[ 4883], 80.00th=[ 5014], 90.00th=[ 5145], 95.00th=[ 5342], 00:30:15.404 | 99.00th=[ 5735], 99.50th=[ 6390], 99.90th=[ 8291], 99.95th=[ 8586], 00:30:15.404 | 99.99th=[ 9110] 00:30:15.404 bw ( KiB/s): min=58224, max=60600, per=99.98%, avg=59760.00, stdev=1066.31, samples=4 00:30:15.404 iops : min=14556, max=15150, avg=14940.00, stdev=266.58, samples=4 00:30:15.404 write: IOPS=14.9k, BW=58.4MiB/s (61.2MB/s)(117MiB/2004msec); 0 zone resets 00:30:15.404 slat (usec): min=2, max=233, avg= 2.25, stdev= 1.53 00:30:15.404 clat (usec): min=2555, max=7070, avg=3790.77, stdev=329.30 00:30:15.404 lat (usec): min=2557, max=7072, avg=3793.02, stdev=329.54 00:30:15.404 clat percentiles (usec): 00:30:15.404 | 1.00th=[ 3097], 5.00th=[ 3326], 10.00th=[ 3425], 20.00th=[ 3556], 00:30:15.404 | 30.00th=[ 3654], 40.00th=[ 3720], 50.00th=[ 3785], 60.00th=[ 3851], 00:30:15.404 | 70.00th=[ 3916], 80.00th=[ 4015], 90.00th=[ 4113], 95.00th=[ 4228], 00:30:15.404 | 99.00th=[ 4555], 99.50th=[ 5342], 99.90th=[ 6718], 99.95th=[ 6915], 00:30:15.404 | 99.99th=[ 7046] 00:30:15.404 bw ( KiB/s): min=58688, max=60480, per=99.99%, avg=59776.00, stdev=773.43, samples=4 00:30:15.404 iops : min=14672, max=15120, avg=14944.00, stdev=193.36, samples=4 00:30:15.404 lat (msec) : 4=40.67%, 10=59.33% 00:30:15.404 cpu : usr=69.75%, sys=26.01%, ctx=32, majf=0, minf=6 00:30:15.404 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:30:15.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:15.404 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:15.404 issued rwts: total=29945,29950,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:15.404 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:15.404 00:30:15.404 Run status group 0 (all jobs): 00:30:15.404 READ: bw=58.4MiB/s (61.2MB/s), 58.4MiB/s-58.4MiB/s (61.2MB/s-61.2MB/s), io=117MiB (123MB), run=2004-2004msec 00:30:15.404 WRITE: bw=58.4MiB/s (61.2MB/s), 58.4MiB/s-58.4MiB/s (61.2MB/s-61.2MB/s), io=117MiB (123MB), run=2004-2004msec 00:30:15.404 23:26:37 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:15.404 23:26:37 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:15.404 23:26:37 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:15.404 23:26:37 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:15.404 23:26:37 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:15.404 23:26:37 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:15.404 23:26:37 -- common/autotest_common.sh@1320 -- # shift 00:30:15.404 23:26:37 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:15.404 23:26:37 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:15.404 23:26:37 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:15.404 23:26:37 -- common/autotest_common.sh@1324 -- # grep libasan 00:30:15.404 23:26:37 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:15.404 23:26:37 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:15.404 23:26:37 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:15.404 23:26:37 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:15.404 23:26:37 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:15.404 23:26:37 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:15.404 23:26:37 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:15.404 23:26:37 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:15.404 23:26:37 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:15.404 23:26:37 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:15.404 23:26:37 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:15.664 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:15.664 fio-3.35 00:30:15.664 Starting 1 thread 00:30:15.664 EAL: No free 2048 kB hugepages reported on node 1 00:30:18.213 00:30:18.213 test: (groupid=0, jobs=1): err= 0: pid=3006770: Fri Jun 7 23:26:40 2024 00:30:18.213 read: IOPS=8930, BW=140MiB/s (146MB/s)(280MiB/2007msec) 00:30:18.213 slat (usec): min=3, max=120, avg= 3.68, stdev= 1.86 00:30:18.213 clat (usec): min=1389, max=15520, avg=8807.78, stdev=2063.33 00:30:18.213 
lat (usec): min=1393, max=15523, avg=8811.46, stdev=2063.52 00:30:18.213 clat percentiles (usec): 00:30:18.213 | 1.00th=[ 4490], 5.00th=[ 5604], 10.00th=[ 6259], 20.00th=[ 6849], 00:30:18.213 | 30.00th=[ 7504], 40.00th=[ 8160], 50.00th=[ 8848], 60.00th=[ 9372], 00:30:18.213 | 70.00th=[ 9896], 80.00th=[10683], 90.00th=[11600], 95.00th=[11994], 00:30:18.213 | 99.00th=[13304], 99.50th=[13960], 99.90th=[14877], 99.95th=[15139], 00:30:18.213 | 99.99th=[15533] 00:30:18.213 bw ( KiB/s): min=62688, max=81632, per=50.39%, avg=72008.00, stdev=8762.29, samples=4 00:30:18.213 iops : min= 3918, max= 5102, avg=4500.50, stdev=547.64, samples=4 00:30:18.213 write: IOPS=5398, BW=84.4MiB/s (88.5MB/s)(146MiB/1735msec); 0 zone resets 00:30:18.213 slat (usec): min=39, max=444, avg=41.16, stdev= 8.45 00:30:18.213 clat (usec): min=2817, max=15400, avg=9554.41, stdev=1456.78 00:30:18.213 lat (usec): min=2857, max=15531, avg=9595.57, stdev=1458.66 00:30:18.213 clat percentiles (usec): 00:30:18.213 | 1.00th=[ 6587], 5.00th=[ 7504], 10.00th=[ 7898], 20.00th=[ 8356], 00:30:18.213 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9765], 00:30:18.213 | 70.00th=[10290], 80.00th=[10683], 90.00th=[11469], 95.00th=[12125], 00:30:18.213 | 99.00th=[13435], 99.50th=[13829], 99.90th=[15139], 99.95th=[15270], 00:30:18.213 | 99.99th=[15401] 00:30:18.213 bw ( KiB/s): min=65888, max=84864, per=86.75%, avg=74936.00, stdev=8673.29, samples=4 00:30:18.213 iops : min= 4118, max= 5304, avg=4683.50, stdev=542.08, samples=4 00:30:18.213 lat (msec) : 2=0.04%, 4=0.36%, 10=68.58%, 20=31.02% 00:30:18.213 cpu : usr=82.30%, sys=14.31%, ctx=14, majf=0, minf=31 00:30:18.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:30:18.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:18.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:18.213 issued rwts: total=17924,9367,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:18.213 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:18.213 00:30:18.213 Run status group 0 (all jobs): 00:30:18.213 READ: bw=140MiB/s (146MB/s), 140MiB/s-140MiB/s (146MB/s-146MB/s), io=280MiB (294MB), run=2007-2007msec 00:30:18.213 WRITE: bw=84.4MiB/s (88.5MB/s), 84.4MiB/s-84.4MiB/s (88.5MB/s-88.5MB/s), io=146MiB (153MB), run=1735-1735msec 00:30:18.213 23:26:40 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:18.213 23:26:40 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:30:18.213 23:26:40 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:30:18.213 23:26:40 -- host/fio.sh@51 -- # get_nvme_bdfs 00:30:18.213 23:26:40 -- common/autotest_common.sh@1498 -- # bdfs=() 00:30:18.213 23:26:40 -- common/autotest_common.sh@1498 -- # local bdfs 00:30:18.213 23:26:40 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:18.213 23:26:40 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:18.213 23:26:40 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:30:18.213 23:26:40 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:30:18.213 23:26:40 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:30:18.213 23:26:40 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 -i 10.0.0.2 
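The get_nvme_bdfs/bdev_nvme_attach_controller step above, together with the lvstore creation and free-space arithmetic in the entries that follow, is how the test turns the local PCIe NVMe device into an lvol-backed bdev. A sketch of the whole step, with paths and names taken from this run; the -i 10.0.0.2 argument seen in the log is omitted for brevity:

# Sketch: discover the local NVMe bdf, attach it, build an lvstore, size lvols from it.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
GEN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh

bdfs=($("$GEN" | jq -r '.config[].params.traddr'))            # -> 0000:65:00.0 on this node
"$RPC" bdev_nvme_attach_controller -b Nvme0 -t PCIe -a "${bdfs[0]}"    # creates bdev Nvme0n1
ls_guid=$("$RPC" bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0) # 1 GiB clusters

# get_lvs_free_mb equivalent: free_clusters * cluster_size, expressed in MiB
fc=$("$RPC" bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$ls_guid\") | .free_clusters")
cs=$("$RPC" bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$ls_guid\") | .cluster_size")
free_mb=$(( fc * cs / 1024 / 1024 ))                          # 1787 * 1 GiB -> 1829888 MiB here
"$RPC" bdev_lvol_create -l lvs_0 lbd_0 "$free_mb"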
00:30:18.474 Nvme0n1 00:30:18.475 23:26:41 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:19.048 23:26:41 -- host/fio.sh@53 -- # ls_guid=8d8eae79-897d-4160-b931-04ebb6665dde 00:30:19.048 23:26:41 -- host/fio.sh@54 -- # get_lvs_free_mb 8d8eae79-897d-4160-b931-04ebb6665dde 00:30:19.048 23:26:41 -- common/autotest_common.sh@1343 -- # local lvs_uuid=8d8eae79-897d-4160-b931-04ebb6665dde 00:30:19.048 23:26:41 -- common/autotest_common.sh@1344 -- # local lvs_info 00:30:19.048 23:26:41 -- common/autotest_common.sh@1345 -- # local fc 00:30:19.048 23:26:41 -- common/autotest_common.sh@1346 -- # local cs 00:30:19.048 23:26:41 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:19.309 23:26:41 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:30:19.309 { 00:30:19.309 "uuid": "8d8eae79-897d-4160-b931-04ebb6665dde", 00:30:19.309 "name": "lvs_0", 00:30:19.309 "base_bdev": "Nvme0n1", 00:30:19.309 "total_data_clusters": 1787, 00:30:19.309 "free_clusters": 1787, 00:30:19.309 "block_size": 512, 00:30:19.309 "cluster_size": 1073741824 00:30:19.309 } 00:30:19.309 ]' 00:30:19.309 23:26:41 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="8d8eae79-897d-4160-b931-04ebb6665dde") .free_clusters' 00:30:19.309 23:26:41 -- common/autotest_common.sh@1348 -- # fc=1787 00:30:19.309 23:26:41 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="8d8eae79-897d-4160-b931-04ebb6665dde") .cluster_size' 00:30:19.309 23:26:41 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:30:19.309 23:26:41 -- common/autotest_common.sh@1352 -- # free_mb=1829888 00:30:19.309 23:26:41 -- common/autotest_common.sh@1353 -- # echo 1829888 00:30:19.309 1829888 00:30:19.309 23:26:41 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1829888 00:30:19.571 1ad5eea2-ed76-47b4-a5df-6a52543710eb 00:30:19.571 23:26:42 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:19.571 23:26:42 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:19.832 23:26:42 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:20.153 23:26:42 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:20.153 23:26:42 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:20.153 23:26:42 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:20.153 23:26:42 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:20.153 23:26:42 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:20.153 23:26:42 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:20.153 23:26:42 -- 
common/autotest_common.sh@1320 -- # shift 00:30:20.153 23:26:42 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:20.153 23:26:42 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:20.153 23:26:42 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:20.153 23:26:42 -- common/autotest_common.sh@1324 -- # grep libasan 00:30:20.153 23:26:42 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:20.153 23:26:42 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:20.153 23:26:42 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:20.153 23:26:42 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:20.153 23:26:42 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:20.153 23:26:42 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:20.153 23:26:42 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:20.153 23:26:42 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:20.153 23:26:42 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:20.153 23:26:42 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:20.153 23:26:42 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:20.422 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:20.422 fio-3.35 00:30:20.422 Starting 1 thread 00:30:20.422 EAL: No free 2048 kB hugepages reported on node 1 00:30:22.965 [2024-06-07 23:26:45.283511] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a710 is same with the state(5) to be set 00:30:22.965 00:30:22.965 test: (groupid=0, jobs=1): err= 0: pid=3007787: Fri Jun 7 23:26:45 2024 00:30:22.965 read: IOPS=11.1k, BW=43.2MiB/s (45.3MB/s)(86.6MiB/2005msec) 00:30:22.965 slat (nsec): min=2047, max=108415, avg=2154.32, stdev=998.18 00:30:22.965 clat (usec): min=2132, max=10654, avg=6394.82, stdev=483.85 00:30:22.965 lat (usec): min=2148, max=10656, avg=6396.97, stdev=483.80 00:30:22.965 clat percentiles (usec): 00:30:22.965 | 1.00th=[ 5276], 5.00th=[ 5604], 10.00th=[ 5800], 20.00th=[ 5997], 00:30:22.965 | 30.00th=[ 6128], 40.00th=[ 6259], 50.00th=[ 6390], 60.00th=[ 6521], 00:30:22.965 | 70.00th=[ 6652], 80.00th=[ 6783], 90.00th=[ 6980], 95.00th=[ 7177], 00:30:22.965 | 99.00th=[ 7439], 99.50th=[ 7570], 99.90th=[ 8225], 99.95th=[ 9241], 00:30:22.965 | 99.99th=[10552] 00:30:22.965 bw ( KiB/s): min=42920, max=44872, per=99.96%, avg=44214.00, stdev=891.90, samples=4 00:30:22.965 iops : min=10730, max=11218, avg=11053.50, stdev=222.98, samples=4 00:30:22.965 write: IOPS=11.0k, BW=43.1MiB/s (45.2MB/s)(86.4MiB/2005msec); 0 zone resets 00:30:22.965 slat (nsec): min=2109, max=97181, avg=2268.84, stdev=707.44 00:30:22.965 clat (usec): min=1222, max=9781, avg=5109.78, stdev=422.53 00:30:22.965 lat (usec): min=1229, max=9784, avg=5112.05, stdev=422.51 00:30:22.965 clat percentiles (usec): 00:30:22.965 | 1.00th=[ 4146], 5.00th=[ 4424], 10.00th=[ 4621], 20.00th=[ 4752], 00:30:22.965 | 30.00th=[ 4883], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5211], 00:30:22.965 | 70.00th=[ 5342], 80.00th=[ 5407], 90.00th=[ 5604], 95.00th=[ 5735], 00:30:22.965 | 99.00th=[ 
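Each "fio-3.35 / Starting 1 thread" block above is produced by the fio_nvme wrapper: it ldd-probes the SPDK fio plugin for a sanitizer runtime, builds LD_PRELOAD from the result, and passes the NVMe-oF target in fio's --filename. A sketch with the paths and filename string from this log; libclang_rt.asan is probed the same way and omitted here:

# Sketch of the fio_plugin invocation used for every fio run in this test.
PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
JOB=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio

# If the plugin was built with ASAN, the sanitizer runtime has to be preloaded before it.
asan_lib=$(ldd "$PLUGIN" | grep libasan | awk '{print $3}')

# The target is encoded in fio's --filename rather than in the job file.
LD_PRELOAD="$asan_lib $PLUGIN" /usr/src/fio/fio "$JOB" \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096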
6063], 99.50th=[ 6194], 99.90th=[ 7635], 99.95th=[ 8291], 00:30:22.965 | 99.99th=[ 9110] 00:30:22.965 bw ( KiB/s): min=43304, max=44840, per=99.97%, avg=44106.00, stdev=628.51, samples=4 00:30:22.965 iops : min=10826, max=11210, avg=11026.50, stdev=157.13, samples=4 00:30:22.965 lat (msec) : 2=0.01%, 4=0.25%, 10=99.72%, 20=0.01% 00:30:22.965 cpu : usr=67.96%, sys=28.64%, ctx=52, majf=0, minf=15 00:30:22.965 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:30:22.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:22.965 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:22.965 issued rwts: total=22172,22115,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:22.965 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:22.965 00:30:22.965 Run status group 0 (all jobs): 00:30:22.965 READ: bw=43.2MiB/s (45.3MB/s), 43.2MiB/s-43.2MiB/s (45.3MB/s-45.3MB/s), io=86.6MiB (90.8MB), run=2005-2005msec 00:30:22.965 WRITE: bw=43.1MiB/s (45.2MB/s), 43.1MiB/s-43.1MiB/s (45.2MB/s-45.2MB/s), io=86.4MiB (90.6MB), run=2005-2005msec 00:30:22.965 23:26:45 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:22.965 23:26:45 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:23.906 23:26:46 -- host/fio.sh@64 -- # ls_nested_guid=8d8dbfc7-d6b3-4a90-943b-d3388cf5af45 00:30:23.906 23:26:46 -- host/fio.sh@65 -- # get_lvs_free_mb 8d8dbfc7-d6b3-4a90-943b-d3388cf5af45 00:30:23.906 23:26:46 -- common/autotest_common.sh@1343 -- # local lvs_uuid=8d8dbfc7-d6b3-4a90-943b-d3388cf5af45 00:30:23.906 23:26:46 -- common/autotest_common.sh@1344 -- # local lvs_info 00:30:23.906 23:26:46 -- common/autotest_common.sh@1345 -- # local fc 00:30:23.906 23:26:46 -- common/autotest_common.sh@1346 -- # local cs 00:30:23.906 23:26:46 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:23.906 23:26:46 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:30:23.906 { 00:30:23.906 "uuid": "8d8eae79-897d-4160-b931-04ebb6665dde", 00:30:23.906 "name": "lvs_0", 00:30:23.906 "base_bdev": "Nvme0n1", 00:30:23.906 "total_data_clusters": 1787, 00:30:23.906 "free_clusters": 0, 00:30:23.906 "block_size": 512, 00:30:23.906 "cluster_size": 1073741824 00:30:23.906 }, 00:30:23.906 { 00:30:23.906 "uuid": "8d8dbfc7-d6b3-4a90-943b-d3388cf5af45", 00:30:23.906 "name": "lvs_n_0", 00:30:23.906 "base_bdev": "1ad5eea2-ed76-47b4-a5df-6a52543710eb", 00:30:23.906 "total_data_clusters": 457025, 00:30:23.906 "free_clusters": 457025, 00:30:23.906 "block_size": 512, 00:30:23.906 "cluster_size": 4194304 00:30:23.906 } 00:30:23.906 ]' 00:30:23.906 23:26:46 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="8d8dbfc7-d6b3-4a90-943b-d3388cf5af45") .free_clusters' 00:30:23.906 23:26:46 -- common/autotest_common.sh@1348 -- # fc=457025 00:30:23.906 23:26:46 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="8d8dbfc7-d6b3-4a90-943b-d3388cf5af45") .cluster_size' 00:30:23.906 23:26:46 -- common/autotest_common.sh@1349 -- # cs=4194304 00:30:23.906 23:26:46 -- common/autotest_common.sh@1352 -- # free_mb=1828100 00:30:23.906 23:26:46 -- common/autotest_common.sh@1353 -- # echo 1828100 00:30:23.906 1828100 00:30:23.906 23:26:46 -- host/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:30:24.848 02cb2293-1fbf-43da-8d36-935317993b21 00:30:25.109 23:26:47 -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:25.109 23:26:47 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:25.371 23:26:47 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:25.371 23:26:48 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:25.371 23:26:48 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:25.371 23:26:48 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:25.371 23:26:48 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:25.371 23:26:48 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:25.371 23:26:48 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:25.371 23:26:48 -- common/autotest_common.sh@1320 -- # shift 00:30:25.371 23:26:48 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:25.371 23:26:48 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:25.371 23:26:48 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:25.371 23:26:48 -- common/autotest_common.sh@1324 -- # grep libasan 00:30:25.371 23:26:48 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:25.371 23:26:48 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:25.371 23:26:48 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:25.371 23:26:48 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:25.665 23:26:48 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:25.665 23:26:48 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:25.665 23:26:48 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:25.665 23:26:48 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:25.665 23:26:48 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:25.665 23:26:48 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:25.665 23:26:48 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:25.934 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:25.934 fio-3.35 00:30:25.934 Starting 1 thread 00:30:25.934 EAL: No free 2048 kB hugepages reported on node 1 00:30:28.476 00:30:28.476 test: (groupid=0, jobs=1): err= 0: pid=3009058: Fri 
Jun 7 23:26:50 2024 00:30:28.476 read: IOPS=9689, BW=37.8MiB/s (39.7MB/s)(75.9MiB/2006msec) 00:30:28.476 slat (usec): min=2, max=111, avg= 2.23, stdev= 1.08 00:30:28.476 clat (usec): min=2793, max=12616, avg=7309.55, stdev=653.76 00:30:28.476 lat (usec): min=2809, max=12619, avg=7311.78, stdev=653.71 00:30:28.476 clat percentiles (usec): 00:30:28.476 | 1.00th=[ 5997], 5.00th=[ 6390], 10.00th=[ 6587], 20.00th=[ 6849], 00:30:28.476 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7308], 60.00th=[ 7439], 00:30:28.476 | 70.00th=[ 7570], 80.00th=[ 7767], 90.00th=[ 8029], 95.00th=[ 8225], 00:30:28.476 | 99.00th=[ 9634], 99.50th=[10421], 99.90th=[11338], 99.95th=[11994], 00:30:28.476 | 99.99th=[12256] 00:30:28.476 bw ( KiB/s): min=38128, max=39504, per=99.94%, avg=38734.00, stdev=578.60, samples=4 00:30:28.476 iops : min= 9532, max= 9876, avg=9683.50, stdev=144.65, samples=4 00:30:28.476 write: IOPS=9696, BW=37.9MiB/s (39.7MB/s)(76.0MiB/2006msec); 0 zone resets 00:30:28.476 slat (nsec): min=2122, max=96972, avg=2327.58, stdev=748.47 00:30:28.476 clat (usec): min=1143, max=11292, avg=5828.17, stdev=557.46 00:30:28.476 lat (usec): min=1150, max=11295, avg=5830.50, stdev=557.44 00:30:28.476 clat percentiles (usec): 00:30:28.476 | 1.00th=[ 4621], 5.00th=[ 5014], 10.00th=[ 5211], 20.00th=[ 5407], 00:30:28.476 | 30.00th=[ 5538], 40.00th=[ 5669], 50.00th=[ 5800], 60.00th=[ 5932], 00:30:28.476 | 70.00th=[ 6063], 80.00th=[ 6194], 90.00th=[ 6390], 95.00th=[ 6652], 00:30:28.476 | 99.00th=[ 7570], 99.50th=[ 8291], 99.90th=[ 9634], 99.95th=[10421], 00:30:28.476 | 99.99th=[11207] 00:30:28.476 bw ( KiB/s): min=38152, max=39424, per=100.00%, avg=38788.00, stdev=527.86, samples=4 00:30:28.476 iops : min= 9538, max= 9856, avg=9697.00, stdev=131.96, samples=4 00:30:28.476 lat (msec) : 2=0.01%, 4=0.12%, 10=99.53%, 20=0.34% 00:30:28.476 cpu : usr=66.58%, sys=30.57%, ctx=36, majf=0, minf=15 00:30:28.476 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:28.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:28.476 issued rwts: total=19437,19451,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.476 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:28.476 00:30:28.476 Run status group 0 (all jobs): 00:30:28.476 READ: bw=37.8MiB/s (39.7MB/s), 37.8MiB/s-37.8MiB/s (39.7MB/s-39.7MB/s), io=75.9MiB (79.6MB), run=2006-2006msec 00:30:28.476 WRITE: bw=37.9MiB/s (39.7MB/s), 37.9MiB/s-37.9MiB/s (39.7MB/s-39.7MB/s), io=76.0MiB (79.7MB), run=2006-2006msec 00:30:28.476 23:26:50 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:28.476 23:26:50 -- host/fio.sh@74 -- # sync 00:30:28.476 23:26:50 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:30.390 23:26:52 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:30.650 23:26:53 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:31.219 23:26:53 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:31.219 23:26:53 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:33.758 
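The rpc.py calls above tear the stack down strictly bottom-up: subsystem first, then the nested lvol and its store, then the base lvol and its store, and only then the NVMe controller. The same order as a plain script, with paths and names from this run:

# Sketch of the bottom-up teardown at the end of the fio host test.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

"$RPC" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3   # stop exporting before deleting bdevs
sync
"$RPC" bdev_lvol_delete lvs_n_0/lbd_nest_0                # nested lvol
"$RPC" bdev_lvol_delete_lvstore -l lvs_n_0                # nested store (it lived on lbd_0)
"$RPC" bdev_lvol_delete lvs_0/lbd_0                       # base lvol
"$RPC" bdev_lvol_delete_lvstore -l lvs_0                  # base store (it lived on Nvme0n1)
"$RPC" bdev_nvme_detach_controller Nvme0                  # finally release the PCIe device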
23:26:55 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:33.758 23:26:55 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:33.758 23:26:55 -- host/fio.sh@86 -- # nvmftestfini 00:30:33.758 23:26:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:33.759 23:26:55 -- nvmf/common.sh@116 -- # sync 00:30:33.759 23:26:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:33.759 23:26:55 -- nvmf/common.sh@119 -- # set +e 00:30:33.759 23:26:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:33.759 23:26:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:33.759 rmmod nvme_tcp 00:30:33.759 rmmod nvme_fabrics 00:30:33.759 rmmod nvme_keyring 00:30:33.759 23:26:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:33.759 23:26:55 -- nvmf/common.sh@123 -- # set -e 00:30:33.759 23:26:55 -- nvmf/common.sh@124 -- # return 0 00:30:33.759 23:26:55 -- nvmf/common.sh@477 -- # '[' -n 3005436 ']' 00:30:33.759 23:26:55 -- nvmf/common.sh@478 -- # killprocess 3005436 00:30:33.759 23:26:55 -- common/autotest_common.sh@926 -- # '[' -z 3005436 ']' 00:30:33.759 23:26:55 -- common/autotest_common.sh@930 -- # kill -0 3005436 00:30:33.759 23:26:55 -- common/autotest_common.sh@931 -- # uname 00:30:33.759 23:26:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:33.759 23:26:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3005436 00:30:33.759 23:26:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:33.759 23:26:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:33.759 23:26:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3005436' 00:30:33.759 killing process with pid 3005436 00:30:33.759 23:26:56 -- common/autotest_common.sh@945 -- # kill 3005436 00:30:33.759 23:26:56 -- common/autotest_common.sh@950 -- # wait 3005436 00:30:33.759 23:26:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:33.759 23:26:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:33.759 23:26:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:33.759 23:26:56 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:33.759 23:26:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:33.759 23:26:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:33.759 23:26:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:33.759 23:26:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:35.670 23:26:58 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:35.670 00:30:35.670 real 0m32.316s 00:30:35.670 user 2m41.533s 00:30:35.670 sys 0m9.444s 00:30:35.670 23:26:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:35.670 23:26:58 -- common/autotest_common.sh@10 -- # set +x 00:30:35.670 ************************************ 00:30:35.670 END TEST nvmf_fio_host 00:30:35.670 ************************************ 00:30:35.670 23:26:58 -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:35.670 23:26:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:35.670 23:26:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:35.670 23:26:58 -- common/autotest_common.sh@10 -- # set +x 00:30:35.670 ************************************ 00:30:35.670 START TEST nvmf_failover 00:30:35.670 ************************************ 00:30:35.670 23:26:58 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:35.670 * Looking for test storage... 00:30:35.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:35.670 23:26:58 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:35.670 23:26:58 -- nvmf/common.sh@7 -- # uname -s 00:30:35.670 23:26:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:35.670 23:26:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:35.670 23:26:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:35.670 23:26:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:35.670 23:26:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:35.670 23:26:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:35.670 23:26:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:35.670 23:26:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:35.670 23:26:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:35.670 23:26:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:35.929 23:26:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:35.929 23:26:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:35.929 23:26:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:35.929 23:26:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:35.929 23:26:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:35.929 23:26:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:35.929 23:26:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:35.929 23:26:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:35.929 23:26:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:35.930 23:26:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.930 23:26:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.930 23:26:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.930 23:26:58 -- paths/export.sh@5 -- # export PATH 00:30:35.930 23:26:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.930 23:26:58 -- nvmf/common.sh@46 -- # : 0 00:30:35.930 23:26:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:35.930 23:26:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:35.930 23:26:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:35.930 23:26:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:35.930 23:26:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:35.930 23:26:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:35.930 23:26:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:35.930 23:26:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:35.930 23:26:58 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:35.930 23:26:58 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:35.930 23:26:58 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:35.930 23:26:58 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:35.930 23:26:58 -- host/failover.sh@18 -- # nvmftestinit 00:30:35.930 23:26:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:35.930 23:26:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:35.930 23:26:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:35.930 23:26:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:35.930 23:26:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:35.930 23:26:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:35.930 23:26:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:35.930 23:26:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:35.930 23:26:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:35.930 23:26:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:35.930 23:26:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:35.930 23:26:58 -- common/autotest_common.sh@10 -- # set +x 00:30:42.580 23:27:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:42.580 23:27:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:42.580 23:27:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:42.580 23:27:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:42.580 23:27:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:42.580 23:27:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:42.580 23:27:05 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:30:42.580 23:27:05 -- nvmf/common.sh@294 -- # net_devs=() 00:30:42.580 23:27:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:42.580 23:27:05 -- nvmf/common.sh@295 -- # e810=() 00:30:42.580 23:27:05 -- nvmf/common.sh@295 -- # local -ga e810 00:30:42.580 23:27:05 -- nvmf/common.sh@296 -- # x722=() 00:30:42.580 23:27:05 -- nvmf/common.sh@296 -- # local -ga x722 00:30:42.580 23:27:05 -- nvmf/common.sh@297 -- # mlx=() 00:30:42.580 23:27:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:42.580 23:27:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:42.580 23:27:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:42.580 23:27:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:42.580 23:27:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:42.580 23:27:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:42.580 23:27:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:42.580 23:27:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:42.580 23:27:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:42.580 23:27:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:42.580 23:27:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:42.580 23:27:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:42.580 23:27:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:42.580 23:27:05 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:42.580 23:27:05 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:42.580 23:27:05 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:42.580 23:27:05 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:42.580 23:27:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:42.580 23:27:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:42.580 23:27:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:42.580 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:42.580 23:27:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:42.580 23:27:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:42.580 23:27:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.580 23:27:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.580 23:27:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:42.580 23:27:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:42.580 23:27:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:42.580 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:42.580 23:27:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:42.580 23:27:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:42.580 23:27:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.580 23:27:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.580 23:27:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:42.580 23:27:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:42.580 23:27:05 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:42.580 23:27:05 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:42.580 23:27:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:42.580 23:27:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.580 23:27:05 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:30:42.580 23:27:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.580 23:27:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:42.580 Found net devices under 0000:31:00.0: cvl_0_0 00:30:42.580 23:27:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.580 23:27:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:42.580 23:27:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.580 23:27:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:42.580 23:27:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.580 23:27:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:42.580 Found net devices under 0000:31:00.1: cvl_0_1 00:30:42.580 23:27:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.580 23:27:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:42.580 23:27:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:42.580 23:27:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:42.580 23:27:05 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:42.580 23:27:05 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:42.580 23:27:05 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:42.580 23:27:05 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:42.580 23:27:05 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:42.580 23:27:05 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:42.580 23:27:05 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:42.580 23:27:05 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:42.580 23:27:05 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:42.580 23:27:05 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:42.580 23:27:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:42.580 23:27:05 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:42.580 23:27:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:42.580 23:27:05 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:42.580 23:27:05 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:42.580 23:27:05 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:42.580 23:27:05 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:42.580 23:27:05 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:42.580 23:27:05 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:42.841 23:27:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:42.841 23:27:05 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:42.841 23:27:05 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:42.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:42.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.700 ms 00:30:42.841 00:30:42.841 --- 10.0.0.2 ping statistics --- 00:30:42.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.841 rtt min/avg/max/mdev = 0.700/0.700/0.700/0.000 ms 00:30:42.841 23:27:05 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:42.841 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:42.841 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:30:42.841 00:30:42.841 --- 10.0.0.1 ping statistics --- 00:30:42.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.841 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:30:42.841 23:27:05 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:42.841 23:27:05 -- nvmf/common.sh@410 -- # return 0 00:30:42.841 23:27:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:42.841 23:27:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:42.841 23:27:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:42.841 23:27:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:42.841 23:27:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:42.841 23:27:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:42.841 23:27:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:42.841 23:27:05 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:42.841 23:27:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:42.841 23:27:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:42.841 23:27:05 -- common/autotest_common.sh@10 -- # set +x 00:30:42.841 23:27:05 -- nvmf/common.sh@469 -- # nvmfpid=3014764 00:30:42.841 23:27:05 -- nvmf/common.sh@470 -- # waitforlisten 3014764 00:30:42.841 23:27:05 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:42.841 23:27:05 -- common/autotest_common.sh@819 -- # '[' -z 3014764 ']' 00:30:42.841 23:27:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:42.841 23:27:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:42.841 23:27:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:42.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:42.841 23:27:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:42.841 23:27:05 -- common/autotest_common.sh@10 -- # set +x 00:30:42.841 [2024-06-07 23:27:05.480869] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:42.841 [2024-06-07 23:27:05.480928] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:42.841 EAL: No free 2048 kB hugepages reported on node 1 00:30:43.102 [2024-06-07 23:27:05.570521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:43.102 [2024-06-07 23:27:05.615562] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:43.102 [2024-06-07 23:27:05.615717] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:43.102 [2024-06-07 23:27:05.615728] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:43.102 [2024-06-07 23:27:05.615737] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
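For reference, the nvmftestinit and nvmfappstart steps traced above reduce to a small set of commands: one port of the e810 pair (cvl_0_0) is moved into a private network namespace, both ends are addressed on 10.0.0.0/24, and the SPDK NVMe-oF target is started inside that namespace. The sketch below is only a condensed reading of the log, with the repository path shortened to ./build/bin/nvmf_tgt; the interface names, addresses and 0xE core mask are the values seen on this test bed and would differ elsewhere.

    # move the target-side port into its own namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # open the default NVMe/TCP port and sanity-check the path
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    # start the NVMe-oF target inside the namespace on cores 1-3 (mask 0xE)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE

The reactor messages that follow (cores 1, 2 and 3) match the 0xE mask, and the tracepoint notices come from the -e 0xFFFF argument passed to the target.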
00:30:43.102 [2024-06-07 23:27:05.615896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:43.102 [2024-06-07 23:27:05.616054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:43.102 [2024-06-07 23:27:05.616055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:43.674 23:27:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:43.674 23:27:06 -- common/autotest_common.sh@852 -- # return 0 00:30:43.674 23:27:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:43.674 23:27:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:43.674 23:27:06 -- common/autotest_common.sh@10 -- # set +x 00:30:43.674 23:27:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:43.674 23:27:06 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:43.934 [2024-06-07 23:27:06.414945] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:43.934 23:27:06 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:43.934 Malloc0 00:30:44.195 23:27:06 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:44.195 23:27:06 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:44.456 23:27:06 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:44.456 [2024-06-07 23:27:07.092300] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:44.456 23:27:07 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:44.717 [2024-06-07 23:27:07.248711] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:44.717 23:27:07 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:44.978 [2024-06-07 23:27:07.405231] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:44.978 23:27:07 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:44.978 23:27:07 -- host/failover.sh@31 -- # bdevperf_pid=3015129 00:30:44.978 23:27:07 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:44.978 23:27:07 -- host/failover.sh@34 -- # waitforlisten 3015129 /var/tmp/bdevperf.sock 00:30:44.978 23:27:07 -- common/autotest_common.sh@819 -- # '[' -z 3015129 ']' 00:30:44.978 23:27:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:44.978 23:27:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:44.978 23:27:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:30:44.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:44.978 23:27:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:44.978 23:27:07 -- common/autotest_common.sh@10 -- # set +x 00:30:45.921 23:27:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:45.921 23:27:08 -- common/autotest_common.sh@852 -- # return 0 00:30:45.921 23:27:08 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:45.921 NVMe0n1 00:30:45.921 23:27:08 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:46.492 00:30:46.492 23:27:08 -- host/failover.sh@39 -- # run_test_pid=3015659 00:30:46.492 23:27:08 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:46.493 23:27:08 -- host/failover.sh@41 -- # sleep 1 00:30:47.435 23:27:09 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:47.435 [2024-06-07 23:27:10.086548] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca4f0 is same with the state(5) to be set 00:30:47.435 [2024-06-07 23:27:10.086593] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca4f0 is same with the state(5) to be set 00:30:47.435 [2024-06-07 23:27:10.086598] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca4f0 is same with the state(5) to be set 00:30:47.435 [2024-06-07 23:27:10.086603] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca4f0 is same with the state(5) to be set 00:30:47.435 [2024-06-07 23:27:10.086608] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca4f0 is same with the state(5) to be set 00:30:47.435 [2024-06-07 23:27:10.086614] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca4f0 is same with the state(5) to be set 00:30:47.435 [2024-06-07 23:27:10.086618] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca4f0 is same with the state(5) to be set 00:30:47.435 [2024-06-07 23:27:10.086622] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca4f0 is same with the state(5) to be set 00:30:47.435 [2024-06-07 23:27:10.086627] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca4f0 is same with the state(5) to be set 00:30:47.435 [2024-06-07 23:27:10.086631] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca4f0 is same with the state(5) to be set 00:30:47.435 [2024-06-07 23:27:10.086636] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca4f0 is same with the state(5) to be set 00:30:47.435 [2024-06-07 23:27:10.086641] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca4f0 is same with the state(5) to be set 00:30:47.435 [2024-06-07 23:27:10.086649] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca4f0 is same with the state(5) to be 
set 00:30:47.435 [2024-06-07 23:27:10.086654] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca4f0 is same with the state(5) to be set 00:30:47.435 [2024-06-07 23:27:10.086658] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca4f0 is same with the state(5) to be set 00:30:47.435 [2024-06-07 23:27:10.086663] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca4f0 is same with the state(5) to be set 00:30:47.435 [2024-06-07 23:27:10.086667] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca4f0 is same with the state(5) to be set 00:30:47.435 [2024-06-07 23:27:10.086672] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca4f0 is same with the state(5) to be set 00:30:47.435 [2024-06-07 23:27:10.086676] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca4f0 is same with the state(5) to be set 00:30:47.435 [2024-06-07 23:27:10.086681] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca4f0 is same with the state(5) to be set 00:30:47.435 [2024-06-07 23:27:10.086685] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca4f0 is same with the state(5) to be set 00:30:47.435 [2024-06-07 23:27:10.086690] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca4f0 is same with the state(5) to be set 00:30:47.435 [2024-06-07 23:27:10.086695] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca4f0 is same with the state(5) to be set 00:30:47.435 [2024-06-07 23:27:10.086701] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca4f0 is same with the state(5) to be set 00:30:47.435 [2024-06-07 23:27:10.086705] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca4f0 is same with the state(5) to be set 00:30:47.435 [2024-06-07 23:27:10.086710] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca4f0 is same with the state(5) to be set 00:30:47.435 [2024-06-07 23:27:10.086714] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca4f0 is same with the state(5) to be set 00:30:47.435 [2024-06-07 23:27:10.086719] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca4f0 is same with the state(5) to be set 00:30:47.435 [2024-06-07 23:27:10.086724] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca4f0 is same with the state(5) to be set 00:30:47.435 [2024-06-07 23:27:10.086728] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca4f0 is same with the state(5) to be set 00:30:47.435 [2024-06-07 23:27:10.086733] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca4f0 is same with the state(5) to be set 00:30:47.435 [2024-06-07 23:27:10.086737] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ca4f0 is same with the state(5) to be set 00:30:47.696 23:27:10 -- host/failover.sh@45 -- # sleep 3 00:30:50.997 23:27:13 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:50.997 00:30:50.997 23:27:13 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:50.997 [2024-06-07 23:27:13.670821] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cba40 is same with the state(5) to be set 00:30:50.997 [2024-06-07 23:27:13.670858] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cba40 is same with the state(5) to be set 00:30:50.997 [2024-06-07 23:27:13.670864] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cba40 is same with the state(5) to be set 00:30:50.997 [2024-06-07 23:27:13.670868] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cba40 is same with the state(5) to be set 00:30:50.997 [2024-06-07 23:27:13.670873] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cba40 is same with the state(5) to be set 00:30:50.997 [2024-06-07 23:27:13.670882] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cba40 is same with the state(5) to be set 00:30:50.997 [2024-06-07 23:27:13.670886] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cba40 is same with the state(5) to be set 00:30:50.997 [2024-06-07 23:27:13.670890] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cba40 is same with the state(5) to be set 00:30:50.997 [2024-06-07 23:27:13.670895] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cba40 is same with the state(5) to be set 00:30:50.997 [2024-06-07 23:27:13.670899] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cba40 is same with the state(5) to be set 00:30:50.997 [2024-06-07 23:27:13.670903] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cba40 is same with the state(5) to be set 00:30:50.997 [2024-06-07 23:27:13.670908] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cba40 is same with the state(5) to be set 00:30:50.997 [2024-06-07 23:27:13.670912] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cba40 is same with the state(5) to be set 00:30:50.997 [2024-06-07 23:27:13.670917] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cba40 is same with the state(5) to be set 00:30:50.997 [2024-06-07 23:27:13.670921] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cba40 is same with the state(5) to be set 00:30:50.997 [2024-06-07 23:27:13.670925] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cba40 is same with the state(5) to be set 00:30:50.997 [2024-06-07 23:27:13.670929] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cba40 is same with the state(5) to be set 00:30:50.997 [2024-06-07 23:27:13.670934] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cba40 is same with the state(5) to be set 00:30:50.997 [2024-06-07 23:27:13.670938] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cba40 is same with the state(5) to be set 00:30:50.997 [2024-06-07 23:27:13.670942] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cba40 is same with the state(5) to be set 00:30:50.997 [2024-06-07 23:27:13.670947] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cba40 is same with the state(5) to be set 00:30:50.997 [2024-06-07 23:27:13.670951] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cba40 is same with the state(5) to be set [... identical recv-state errors for tqpair=0x8cba40 repeated; duplicate lines omitted ...]
00:30:50.997 [2024-06-07 23:27:13.671048] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cba40 is same with the state(5) to be set 00:30:50.997 [2024-06-07 23:27:13.671053] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cba40 is same with the state(5) to be set 00:30:50.997 [2024-06-07 23:27:13.671058] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cba40 is same with the state(5) to be set 00:30:50.997 [2024-06-07 23:27:13.671062] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cba40 is same with the state(5) to be set 00:30:50.997 [2024-06-07 23:27:13.671066] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cba40 is same with the state(5) to be set 00:30:50.997 [2024-06-07 23:27:13.671070] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cba40 is same with the state(5) to be set 00:30:50.997 [2024-06-07 23:27:13.671075] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cba40 is same with the state(5) to be set 00:30:50.997 [2024-06-07 23:27:13.671079] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cba40 is same with the state(5) to be set 00:30:51.257 23:27:13 -- host/failover.sh@50 -- # sleep 3 00:30:54.554 23:27:16 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:54.554 [2024-06-07 23:27:16.839433] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:54.554 23:27:16 -- host/failover.sh@55 -- # sleep 1 00:30:55.497 23:27:17 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:55.497 [2024-06-07 23:27:18.011444] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa81cd0 is same with the state(5) to be set 00:30:55.497 [2024-06-07 23:27:18.011478] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa81cd0 is same with the state(5) to be set 00:30:55.497 [2024-06-07 23:27:18.011484] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa81cd0 is same with the state(5) to be set 00:30:55.497 [2024-06-07 23:27:18.011489] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa81cd0 is same with the state(5) to be set 00:30:55.497 [2024-06-07 23:27:18.011499] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa81cd0 is same with the state(5) to be set 00:30:55.497 [2024-06-07 23:27:18.011503] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa81cd0 is same with the state(5) to be set 00:30:55.497 [2024-06-07 23:27:18.011508] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa81cd0 is same with the state(5) to be set 00:30:55.497 [2024-06-07 23:27:18.011512] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa81cd0 is same with the state(5) to be set 00:30:55.497 [2024-06-07 23:27:18.011516] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa81cd0 is same with the state(5) to be set 00:30:55.497 [2024-06-07 23:27:18.011521] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa81cd0 is same 
with the state(5) to be set [... identical recv-state errors for tqpair=0xa81cd0 repeated; duplicate lines omitted ...] 00:30:55.497 [2024-06-07 23:27:18.011715]
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa81cd0 is same with the state(5) to be set 00:30:55.497 23:27:18 -- host/failover.sh@59 -- # wait 3015659 00:31:02.086 0 00:31:02.086 23:27:24 -- host/failover.sh@61 -- # killprocess 3015129 00:31:02.086 23:27:24 -- common/autotest_common.sh@926 -- # '[' -z 3015129 ']' 00:31:02.086 23:27:24 -- common/autotest_common.sh@930 -- # kill -0 3015129 00:31:02.086 23:27:24 -- common/autotest_common.sh@931 -- # uname 00:31:02.086 23:27:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:02.086 23:27:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3015129 00:31:02.086 23:27:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:02.086 23:27:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:02.086 23:27:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3015129' 00:31:02.086 killing process with pid 3015129 00:31:02.086 23:27:24 -- common/autotest_common.sh@945 -- # kill 3015129 00:31:02.086 23:27:24 -- common/autotest_common.sh@950 -- # wait 3015129 00:31:02.087 23:27:24 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:02.087 [2024-06-07 23:27:07.475548] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:02.087 [2024-06-07 23:27:07.475644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3015129 ] 00:31:02.087 EAL: No free 2048 kB hugepages reported on node 1 00:31:02.087 [2024-06-07 23:27:07.539900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.087 [2024-06-07 23:27:07.569411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.087 Running I/O for 15 seconds... 
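The try.txt excerpt that follows is the bdevperf side of the failover exercise. In outline, the test attaches the same subsystem to one bdev_nvme controller (NVMe0) through a primary and an alternate portal, then keeps removing whichever listener is in use while a 15 second verify workload (-q 128, 4 KiB I/O) runs against NVMe0n1. A condensed sketch of the RPC sequence driven by failover.sh, with the long rpc.py paths shortened to ./scripts/rpc.py (everything else mirrors the commands in this log):

    # give the controller a primary path (4420) and an alternate path (4421)
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # while bdevperf runs, repeatedly pull the portal that is currently in use
    ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 3
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    sleep 3
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

Each listener removal is what produced the recv-state errors above on the target side; on the initiator side it shows up below as commands completed with ABORTED - SQ DELETION while the queues of the dropped connection are torn down, the intent of the test being that bdev_nvme reconnects through a surviving portal and the verify workload still finishes.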
00:31:02.087 [2024-06-07 23:27:10.087549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:37800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.087 [2024-06-07 23:27:10.087584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.087 [2024-06-07 23:27:10.087603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:38208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.087 [2024-06-07 23:27:10.087613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.087 [2024-06-07 23:27:10.087623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:38224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.087 [2024-06-07 23:27:10.087630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.087 [2024-06-07 23:27:10.087640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:38232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.087 [2024-06-07 23:27:10.087647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.087 [2024-06-07 23:27:10.087657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.087 [2024-06-07 23:27:10.087664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.087 [2024-06-07 23:27:10.087673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:38264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.087 [2024-06-07 23:27:10.087681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.087 [2024-06-07 23:27:10.087690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:38320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.087 [2024-06-07 23:27:10.087697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.087 [2024-06-07 23:27:10.087706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:38328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.087 [2024-06-07 23:27:10.087713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.087 [2024-06-07 23:27:10.087723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:38352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.087 [2024-06-07 23:27:10.087729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.087 [2024-06-07 23:27:10.087739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:38360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.087 [2024-06-07 23:27:10.087747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.087 [2024-06-07 23:27:10.087756] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:38392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.087 [2024-06-07 23:27:10.087763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.087 [2024-06-07 23:27:10.087778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.087 [2024-06-07 23:27:10.087785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.087 [2024-06-07 23:27:10.087795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:38416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.087 [2024-06-07 23:27:10.087802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.087 [2024-06-07 23:27:10.087812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:38424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.087 [2024-06-07 23:27:10.087819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.087 [2024-06-07 23:27:10.087828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.087 [2024-06-07 23:27:10.087835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.087 [2024-06-07 23:27:10.087844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:38440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.087 [2024-06-07 23:27:10.087851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.087 [2024-06-07 23:27:10.087861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.087 [2024-06-07 23:27:10.087868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.087 [2024-06-07 23:27:10.087877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.087 [2024-06-07 23:27:10.087884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.087 [2024-06-07 23:27:10.087893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:38464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.087 [2024-06-07 23:27:10.087901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.087 [2024-06-07 23:27:10.087909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.087 [2024-06-07 23:27:10.087917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.087 [2024-06-07 23:27:10.087926] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:38480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.087 [2024-06-07 23:27:10.087933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.087 [2024-06-07 23:27:10.087942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.087 [2024-06-07 23:27:10.087949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.087 [2024-06-07 23:27:10.087958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:38496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.087 [2024-06-07 23:27:10.087966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.087 [2024-06-07 23:27:10.087974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:38504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.087 [2024-06-07 23:27:10.087983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.087 [2024-06-07 23:27:10.087993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:37840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.087 [2024-06-07 23:27:10.088000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.087 [2024-06-07 23:27:10.088009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.087 [2024-06-07 23:27:10.088016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.087 [2024-06-07 23:27:10.088025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:37864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.087 [2024-06-07 23:27:10.088032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.087 [2024-06-07 23:27:10.088041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:37872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.087 [2024-06-07 23:27:10.088048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.087 [2024-06-07 23:27:10.088057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:37888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.087 [2024-06-07 23:27:10.088065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.087 [2024-06-07 23:27:10.088074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:37896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.087 [2024-06-07 23:27:10.088081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.087 [2024-06-07 23:27:10.088091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:37936 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.087 [2024-06-07 23:27:10.088098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.087 [2024-06-07 23:27:10.088107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.087 [2024-06-07 23:27:10.088114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.087 [2024-06-07 23:27:10.088123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:38520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.087 [2024-06-07 23:27:10.088131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.087 [2024-06-07 23:27:10.088140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:38528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.087 [2024-06-07 23:27:10.088148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.087 [2024-06-07 23:27:10.088157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:38536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.087 [2024-06-07 23:27:10.088164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.087 [2024-06-07 23:27:10.088173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:38544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.087 [2024-06-07 23:27:10.088181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:38552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.088 [2024-06-07 23:27:10.088199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.088 [2024-06-07 23:27:10.088215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:38568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.088 [2024-06-07 23:27:10.088231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:38576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.088 [2024-06-07 23:27:10.088253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:38584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:02.088 [2024-06-07 23:27:10.088269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.088 [2024-06-07 23:27:10.088285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:38600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.088 [2024-06-07 23:27:10.088301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:38608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.088 [2024-06-07 23:27:10.088317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:38616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.088 [2024-06-07 23:27:10.088334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:38624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.088 [2024-06-07 23:27:10.088350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:37944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.088 [2024-06-07 23:27:10.088366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:37968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.088 [2024-06-07 23:27:10.088383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:37976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.088 [2024-06-07 23:27:10.088401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:37984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.088 [2024-06-07 23:27:10.088417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:37992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.088 [2024-06-07 23:27:10.088433] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:38000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.088 [2024-06-07 23:27:10.088450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:38008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.088 [2024-06-07 23:27:10.088466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.088 [2024-06-07 23:27:10.088483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:38632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.088 [2024-06-07 23:27:10.088499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:38640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.088 [2024-06-07 23:27:10.088515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:38648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.088 [2024-06-07 23:27:10.088531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:38656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.088 [2024-06-07 23:27:10.088547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:38664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.088 [2024-06-07 23:27:10.088563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:38672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.088 [2024-06-07 23:27:10.088579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.088 [2024-06-07 23:27:10.088596] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:38688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.088 [2024-06-07 23:27:10.088614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.088 [2024-06-07 23:27:10.088630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.088 [2024-06-07 23:27:10.088648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:38712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.088 [2024-06-07 23:27:10.088665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:38720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.088 [2024-06-07 23:27:10.088681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.088 [2024-06-07 23:27:10.088697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:38736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.088 [2024-06-07 23:27:10.088713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:38744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.088 [2024-06-07 23:27:10.088729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:38752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.088 [2024-06-07 23:27:10.088746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:38760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.088 [2024-06-07 23:27:10.088762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.088 [2024-06-07 23:27:10.088778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:38776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.088 [2024-06-07 23:27:10.088795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:38784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.088 [2024-06-07 23:27:10.088812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.088 [2024-06-07 23:27:10.088830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.088 [2024-06-07 23:27:10.088839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:38800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.089 [2024-06-07 23:27:10.088846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.088856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:38064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.089 [2024-06-07 23:27:10.088863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.088873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.089 [2024-06-07 23:27:10.088880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.088890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:38096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.089 [2024-06-07 23:27:10.088898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.088907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:38104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.089 [2024-06-07 23:27:10.088915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.088925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.089 [2024-06-07 23:27:10.088932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.088941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:38144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.089 [2024-06-07 23:27:10.088948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.088957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:38152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.089 [2024-06-07 23:27:10.088964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.088973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:38160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.089 [2024-06-07 23:27:10.088981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.088990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:38192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.089 [2024-06-07 23:27:10.088997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.089006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.089 [2024-06-07 23:27:10.089013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.089021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:38816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.089 [2024-06-07 23:27:10.089030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.089039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:38824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.089 [2024-06-07 23:27:10.089046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.089056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:38832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.089 [2024-06-07 23:27:10.089063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.089072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:38840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.089 [2024-06-07 23:27:10.089079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.089088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:38848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.089 [2024-06-07 23:27:10.089095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 
[2024-06-07 23:27:10.089104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.089 [2024-06-07 23:27:10.089111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.089121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:38864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.089 [2024-06-07 23:27:10.089128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.089137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:38872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.089 [2024-06-07 23:27:10.089143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.089152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:38880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.089 [2024-06-07 23:27:10.089159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.089169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:38888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.089 [2024-06-07 23:27:10.089176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.089185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:38896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.089 [2024-06-07 23:27:10.089192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.089201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:38904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.089 [2024-06-07 23:27:10.089208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.089217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:38912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.089 [2024-06-07 23:27:10.089224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.089238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.089 [2024-06-07 23:27:10.089249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.089258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.089 [2024-06-07 23:27:10.089266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.089275] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:38936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.089 [2024-06-07 23:27:10.089282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.089291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:38200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.089 [2024-06-07 23:27:10.089298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.089308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.089 [2024-06-07 23:27:10.089315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.089324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:38240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.089 [2024-06-07 23:27:10.089331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.089340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:38248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.089 [2024-06-07 23:27:10.089348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.089357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:38272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.089 [2024-06-07 23:27:10.089364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.089373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:38280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.089 [2024-06-07 23:27:10.089380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.089389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:38288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.089 [2024-06-07 23:27:10.089397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.089406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:38944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.089 [2024-06-07 23:27:10.089413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.089422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:38952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.089 [2024-06-07 23:27:10.089429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.089438] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:62 nsid:1 lba:38960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.089 [2024-06-07 23:27:10.089445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.089456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:38968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.089 [2024-06-07 23:27:10.089463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.089472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:38976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.089 [2024-06-07 23:27:10.089480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.089489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.089 [2024-06-07 23:27:10.089496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.089 [2024-06-07 23:27:10.089504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:38992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.090 [2024-06-07 23:27:10.089512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.090 [2024-06-07 23:27:10.089521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.090 [2024-06-07 23:27:10.089529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.090 [2024-06-07 23:27:10.089537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.090 [2024-06-07 23:27:10.089545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.090 [2024-06-07 23:27:10.089554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:39016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.090 [2024-06-07 23:27:10.089561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.090 [2024-06-07 23:27:10.089571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:38296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.090 [2024-06-07 23:27:10.089578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.090 [2024-06-07 23:27:10.089587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:38304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.090 [2024-06-07 23:27:10.089594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.090 [2024-06-07 23:27:10.089604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:38312 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.090 [2024-06-07 23:27:10.089611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.090 [2024-06-07 23:27:10.089620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:38336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.090 [2024-06-07 23:27:10.089628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.090 [2024-06-07 23:27:10.089637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:38344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.090 [2024-06-07 23:27:10.089644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.090 [2024-06-07 23:27:10.089653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:38368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.090 [2024-06-07 23:27:10.089662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.090 [2024-06-07 23:27:10.089671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:38376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.090 [2024-06-07 23:27:10.089679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.090 [2024-06-07 23:27:10.089688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:38384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.090 [2024-06-07 23:27:10.089696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.090 [2024-06-07 23:27:10.089717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:02.090 [2024-06-07 23:27:10.089724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:02.090 [2024-06-07 23:27:10.089731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38400 len:8 PRP1 0x0 PRP2 0x0 00:31:02.090 [2024-06-07 23:27:10.089740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.090 [2024-06-07 23:27:10.089778] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xec4890 was disconnected and freed. reset controller. 
00:31:02.090 [2024-06-07 23:27:10.089793] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:02.090 [2024-06-07 23:27:10.089813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:02.090 [2024-06-07 23:27:10.089822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.090 [2024-06-07 23:27:10.089830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:02.090 [2024-06-07 23:27:10.089838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.090 [2024-06-07 23:27:10.089846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:02.090 [2024-06-07 23:27:10.089853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.090 [2024-06-07 23:27:10.089860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:02.090 [2024-06-07 23:27:10.089868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.090 [2024-06-07 23:27:10.089876] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:02.090 [2024-06-07 23:27:10.092067] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:02.090 [2024-06-07 23:27:10.092089] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea5c10 (9): Bad file descriptor 00:31:02.090 [2024-06-07 23:27:10.126817] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:31:02.090 [2024-06-07 23:27:13.672174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:93928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.090 [2024-06-07 23:27:13.672212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.090 [2024-06-07 23:27:13.672228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:94432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.090 [2024-06-07 23:27:13.672236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.090 [2024-06-07 23:27:13.672257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:94448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.090 [2024-06-07 23:27:13.672265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.090 [2024-06-07 23:27:13.672274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:94472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.090 [2024-06-07 23:27:13.672281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.090 [2024-06-07 23:27:13.672290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:94504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.090 [2024-06-07 23:27:13.672297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.090 [2024-06-07 23:27:13.672306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:94512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.090 [2024-06-07 23:27:13.672313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.090 [2024-06-07 23:27:13.672322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:94520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.090 [2024-06-07 23:27:13.672329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.090 [2024-06-07 23:27:13.672338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.090 [2024-06-07 23:27:13.672345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.090 [2024-06-07 23:27:13.672354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.090 [2024-06-07 23:27:13.672361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.090 [2024-06-07 23:27:13.672370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:94560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.090 [2024-06-07 23:27:13.672378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.090 [2024-06-07 23:27:13.672387] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.090 [2024-06-07 23:27:13.672394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.090 [2024-06-07 23:27:13.672403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:94576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.090 [2024-06-07 23:27:13.672410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.090 [2024-06-07 23:27:13.672419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:94584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.090 [2024-06-07 23:27:13.672426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.090 [2024-06-07 23:27:13.672435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.090 [2024-06-07 23:27:13.672442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.090 [2024-06-07 23:27:13.672451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:93936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.090 [2024-06-07 23:27:13.672459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.090 [2024-06-07 23:27:13.672468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:93944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.090 [2024-06-07 23:27:13.672475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.090 [2024-06-07 23:27:13.672484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:94000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.090 [2024-06-07 23:27:13.672491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.090 [2024-06-07 23:27:13.672500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.090 [2024-06-07 23:27:13.672507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.090 [2024-06-07 23:27:13.672516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:94024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.090 [2024-06-07 23:27:13.672523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.090 [2024-06-07 23:27:13.672532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:94032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.090 [2024-06-07 23:27:13.672539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.091 [2024-06-07 23:27:13.672548] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:94040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.091 [2024-06-07 23:27:13.672555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.091 [2024-06-07 23:27:13.672564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:94048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.091 [2024-06-07 23:27:13.672570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.091 [2024-06-07 23:27:13.672580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:94064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.091 [2024-06-07 23:27:13.672587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.091 [2024-06-07 23:27:13.672596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:94072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.091 [2024-06-07 23:27:13.672603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.091 [2024-06-07 23:27:13.672612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:94088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.091 [2024-06-07 23:27:13.672619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.091 [2024-06-07 23:27:13.672628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:94096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.091 [2024-06-07 23:27:13.672634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.091 [2024-06-07 23:27:13.672643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:94104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.091 [2024-06-07 23:27:13.672650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.091 [2024-06-07 23:27:13.672661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:94112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.091 [2024-06-07 23:27:13.672668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.091 [2024-06-07 23:27:13.672677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:94128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.091 [2024-06-07 23:27:13.672684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.091 [2024-06-07 23:27:13.672692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:94600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.091 [2024-06-07 23:27:13.672699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.091 [2024-06-07 23:27:13.672708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:20 nsid:1 lba:94608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.091 [2024-06-07 23:27:13.672715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.091 [2024-06-07 23:27:13.672724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:94616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.091 [2024-06-07 23:27:13.672730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.091 [2024-06-07 23:27:13.672739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:94624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.091 [2024-06-07 23:27:13.672746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.091 [2024-06-07 23:27:13.672755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:94632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.091 [2024-06-07 23:27:13.672763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.091 [2024-06-07 23:27:13.672771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:94640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.091 [2024-06-07 23:27:13.672778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.091 [2024-06-07 23:27:13.672787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:94648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.091 [2024-06-07 23:27:13.672794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.091 [2024-06-07 23:27:13.672803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.091 [2024-06-07 23:27:13.672810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.091 [2024-06-07 23:27:13.672819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:94664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.091 [2024-06-07 23:27:13.672826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.091 [2024-06-07 23:27:13.672835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:94672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.091 [2024-06-07 23:27:13.672842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.091 [2024-06-07 23:27:13.672850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:94680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.091 [2024-06-07 23:27:13.672860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.091 [2024-06-07 23:27:13.672869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:94688 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.091 [2024-06-07 23:27:13.672875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.091 [2024-06-07 23:27:13.672884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:94696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.091 [2024-06-07 23:27:13.672891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.091 [2024-06-07 23:27:13.672901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:94704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.091 [2024-06-07 23:27:13.672907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.091 [2024-06-07 23:27:13.672916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.091 [2024-06-07 23:27:13.672923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.091 [2024-06-07 23:27:13.672932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.091 [2024-06-07 23:27:13.672939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.091 [2024-06-07 23:27:13.672948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:94728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.091 [2024-06-07 23:27:13.672955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.091 [2024-06-07 23:27:13.672964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:94736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.091 [2024-06-07 23:27:13.672971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.091 [2024-06-07 23:27:13.672980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:94744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.091 [2024-06-07 23:27:13.672987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.091 [2024-06-07 23:27:13.672996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.091 [2024-06-07 23:27:13.673003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.091 [2024-06-07 23:27:13.673012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:94760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.091 [2024-06-07 23:27:13.673019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.091 [2024-06-07 23:27:13.673028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:94768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:02.092 [2024-06-07 23:27:13.673035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:94776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.092 [2024-06-07 23:27:13.673050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:94136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.092 [2024-06-07 23:27:13.673068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:94160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.092 [2024-06-07 23:27:13.673085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:94168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.092 [2024-06-07 23:27:13.673102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:94176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.092 [2024-06-07 23:27:13.673118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:94192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.092 [2024-06-07 23:27:13.673134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:94232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.092 [2024-06-07 23:27:13.673149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:94240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.092 [2024-06-07 23:27:13.673165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:94264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.092 [2024-06-07 23:27:13.673181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:94784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.092 [2024-06-07 23:27:13.673197] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:94792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.092 [2024-06-07 23:27:13.673212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:94800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.092 [2024-06-07 23:27:13.673228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:94808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.092 [2024-06-07 23:27:13.673247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.092 [2024-06-07 23:27:13.673264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:94824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.092 [2024-06-07 23:27:13.673281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:94832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.092 [2024-06-07 23:27:13.673297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:94840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.092 [2024-06-07 23:27:13.673313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:94848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.092 [2024-06-07 23:27:13.673329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:94856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.092 [2024-06-07 23:27:13.673345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:94864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.092 [2024-06-07 23:27:13.673360] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:94872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.092 [2024-06-07 23:27:13.673376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:94880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.092 [2024-06-07 23:27:13.673393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:94888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.092 [2024-06-07 23:27:13.673408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:94272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.092 [2024-06-07 23:27:13.673424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:94288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.092 [2024-06-07 23:27:13.673440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:94320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.092 [2024-06-07 23:27:13.673455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:94328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.092 [2024-06-07 23:27:13.673473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:94344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.092 [2024-06-07 23:27:13.673489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:94384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.092 [2024-06-07 23:27:13.673504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:94896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.092 [2024-06-07 23:27:13.673521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:94904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.092 [2024-06-07 23:27:13.673536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:94912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.092 [2024-06-07 23:27:13.673552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:94920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.092 [2024-06-07 23:27:13.673568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:94928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.092 [2024-06-07 23:27:13.673583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.092 [2024-06-07 23:27:13.673599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:94944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.092 [2024-06-07 23:27:13.673615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.092 [2024-06-07 23:27:13.673631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.092 [2024-06-07 23:27:13.673646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.092 [2024-06-07 23:27:13.673655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.092 [2024-06-07 23:27:13.673662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.093 [2024-06-07 23:27:13.673672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.093 [2024-06-07 23:27:13.673679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:02.093 [2024-06-07 23:27:13.673688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:94984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.093 [2024-06-07 23:27:13.673695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.093 [2024-06-07 23:27:13.673704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.093 [2024-06-07 23:27:13.673711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.093 [2024-06-07 23:27:13.673720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.093 [2024-06-07 23:27:13.673727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.093 [2024-06-07 23:27:13.673736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:95008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.093 [2024-06-07 23:27:13.673743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.093 [2024-06-07 23:27:13.673752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:95016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.093 [2024-06-07 23:27:13.673759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.093 [2024-06-07 23:27:13.673768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:94440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.093 [2024-06-07 23:27:13.673775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.093 [2024-06-07 23:27:13.673784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:94456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.093 [2024-06-07 23:27:13.673791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.093 [2024-06-07 23:27:13.673800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:94464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.093 [2024-06-07 23:27:13.673808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.093 [2024-06-07 23:27:13.673817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:94480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.093 [2024-06-07 23:27:13.673823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.093 [2024-06-07 23:27:13.673832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:94488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.093 [2024-06-07 23:27:13.673839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.093 [2024-06-07 23:27:13.673848] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:94496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.093 [2024-06-07 23:27:13.673855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.093 [2024-06-07 23:27:13.673864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:94528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.093 [2024-06-07 23:27:13.673871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.093 [2024-06-07 23:27:13.673881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.093 [2024-06-07 23:27:13.673889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.093 [2024-06-07 23:27:13.673897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:95032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.093 [2024-06-07 23:27:13.673905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.093 [2024-06-07 23:27:13.673913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.093 [2024-06-07 23:27:13.673920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.093 [2024-06-07 23:27:13.673929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:95048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.093 [2024-06-07 23:27:13.673936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.093 [2024-06-07 23:27:13.673945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:95056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.093 [2024-06-07 23:27:13.673952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.093 [2024-06-07 23:27:13.673960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:95064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.093 [2024-06-07 23:27:13.673967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.093 [2024-06-07 23:27:13.673976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:95072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.093 [2024-06-07 23:27:13.673983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.093 [2024-06-07 23:27:13.673992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:95080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.093 [2024-06-07 23:27:13.673999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.093 [2024-06-07 23:27:13.674007] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.093 [2024-06-07 23:27:13.674014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.093 [2024-06-07 23:27:13.674023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:95096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.093 [2024-06-07 23:27:13.674030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.093 [2024-06-07 23:27:13.674039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.093 [2024-06-07 23:27:13.674047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.093 [2024-06-07 23:27:13.674056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:95112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.093 [2024-06-07 23:27:13.674063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.093 [2024-06-07 23:27:13.674072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.093 [2024-06-07 23:27:13.674081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.093 [2024-06-07 23:27:13.674090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:95128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.093 [2024-06-07 23:27:13.674097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.093 [2024-06-07 23:27:13.674106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.093 [2024-06-07 23:27:13.674113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.093 [2024-06-07 23:27:13.674122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:95144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.093 [2024-06-07 23:27:13.674129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.093 [2024-06-07 23:27:13.674138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:95152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.093 [2024-06-07 23:27:13.674145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.093 [2024-06-07 23:27:13.674155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:95160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.093 [2024-06-07 23:27:13.674162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.093 [2024-06-07 23:27:13.674172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:95168 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.093 [2024-06-07 23:27:13.674180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.093 [2024-06-07 23:27:13.674202] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:02.093 [2024-06-07 23:27:13.674209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95176 len:8 PRP1 0x0 PRP2 0x0 00:31:02.093 [2024-06-07 23:27:13.674216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.093 [2024-06-07 23:27:13.674227] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:02.093 [2024-06-07 23:27:13.674233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:02.093 [2024-06-07 23:27:13.674239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95184 len:8 PRP1 0x0 PRP2 0x0 00:31:02.093 [2024-06-07 23:27:13.674250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.093 [2024-06-07 23:27:13.674257] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:02.093 [2024-06-07 23:27:13.674262] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:02.093 [2024-06-07 23:27:13.674268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95192 len:8 PRP1 0x0 PRP2 0x0 00:31:02.093 [2024-06-07 23:27:13.674275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.093 [2024-06-07 23:27:13.674282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:02.093 [2024-06-07 23:27:13.674287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:02.093 [2024-06-07 23:27:13.674293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95200 len:8 PRP1 0x0 PRP2 0x0 00:31:02.093 [2024-06-07 23:27:13.674300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.093 [2024-06-07 23:27:13.674312] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:02.093 [2024-06-07 23:27:13.674317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:02.094 [2024-06-07 23:27:13.674323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95208 len:8 PRP1 0x0 PRP2 0x0 00:31:02.094 [2024-06-07 23:27:13.674330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.094 [2024-06-07 23:27:13.674337] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:02.094 [2024-06-07 23:27:13.674342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:02.094 [2024-06-07 23:27:13.674348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94536 len:8 PRP1 0x0 PRP2 0x0 00:31:02.094 [2024-06-07 23:27:13.674355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.094 [2024-06-07 23:27:13.674392] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xec68a0 was disconnected and freed. reset controller. 00:31:02.094 [2024-06-07 23:27:13.674401] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:31:02.094 [2024-06-07 23:27:13.674420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:02.094 [2024-06-07 23:27:13.674428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.094 [2024-06-07 23:27:13.674436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:02.094 [2024-06-07 23:27:13.674443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.094 [2024-06-07 23:27:13.674451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:02.094 [2024-06-07 23:27:13.674457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.094 [2024-06-07 23:27:13.674465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:02.094 [2024-06-07 23:27:13.674472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.094 [2024-06-07 23:27:13.674479] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:02.094 [2024-06-07 23:27:13.676923] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:02.094 [2024-06-07 23:27:13.676946] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea5c10 (9): Bad file descriptor 00:31:02.094 [2024-06-07 23:27:13.710258] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
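The sequence above records bdev_nvme handling the lost TCP connection: the qpair is disconnected and freed, every queued I/O is completed with ABORTED - SQ DELETION status, and the controller fails over from 10.0.0.2:4421 to 10.0.0.2:4422 before the reset completes successfully. A minimal sketch of how such a multi-listener failover topology is typically wired up with SPDK's scripts/rpc.py follows; the bdev name NVMe0, the Malloc0 namespace, the serial number, and registering the alternate path by repeating bdev_nvme_attach_controller with the same -b name are illustrative assumptions, not the exact commands issued by this test run.

  # Target side (hypothetical setup): one subsystem exposed on two TCP listeners.
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -f ipv4 -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -f ipv4 -a 10.0.0.2 -s 4422
  # Host side: attach the same subsystem through both listeners so bdev_nvme holds an alternate trid to fail over to.
  scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -f ipv4 -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -f ipv4 -a 10.0.0.2 -s 4422 -n nqn.2016-06.io.spdk:cnode1
  # Removing the active listener produces an abort storm like the one above and triggers the failover to :4422.
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -f ipv4 -a 10.0.0.2 -s 4421

With that setup, in-flight reads and writes queued on the dropped qpair are expected to be reported exactly as above (one nvme_io_qpair_print_command / spdk_nvme_print_completion pair per command), after which the bdev layer retries them on the surviving path.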
00:31:02.094 [2024-06-07 23:27:18.011957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.094 [2024-06-07 23:27:18.011993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.094 [2024-06-07 23:27:18.012010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.094 [2024-06-07 23:27:18.012018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.094 [2024-06-07 23:27:18.012028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.094 [2024-06-07 23:27:18.012035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.094 [2024-06-07 23:27:18.012049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.094 [2024-06-07 23:27:18.012057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.094 [2024-06-07 23:27:18.012066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.094 [2024-06-07 23:27:18.012072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.094 [2024-06-07 23:27:18.012081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.094 [2024-06-07 23:27:18.012088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.094 [2024-06-07 23:27:18.012098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.094 [2024-06-07 23:27:18.012105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.094 [2024-06-07 23:27:18.012114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.094 [2024-06-07 23:27:18.012121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.094 [2024-06-07 23:27:18.012130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.094 [2024-06-07 23:27:18.012137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.094 [2024-06-07 23:27:18.012147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.094 [2024-06-07 23:27:18.012154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.094 [2024-06-07 23:27:18.012163] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.094 [2024-06-07 23:27:18.012170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.094 [2024-06-07 23:27:18.012180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.094 [2024-06-07 23:27:18.012187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.094 [2024-06-07 23:27:18.012195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.094 [2024-06-07 23:27:18.012202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.094 [2024-06-07 23:27:18.012211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.094 [2024-06-07 23:27:18.012218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.094 [2024-06-07 23:27:18.012227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.094 [2024-06-07 23:27:18.012234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.094 [2024-06-07 23:27:18.012248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.094 [2024-06-07 23:27:18.012257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.094 [2024-06-07 23:27:18.012266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:13336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.094 [2024-06-07 23:27:18.012273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.094 [2024-06-07 23:27:18.012282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.094 [2024-06-07 23:27:18.012289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.094 [2024-06-07 23:27:18.012297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.094 [2024-06-07 23:27:18.012304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.094 [2024-06-07 23:27:18.012313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.094 [2024-06-07 23:27:18.012320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.094 [2024-06-07 23:27:18.012329] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.094 [2024-06-07 23:27:18.012336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.094 [2024-06-07 23:27:18.012345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.094 [2024-06-07 23:27:18.012352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.094 [2024-06-07 23:27:18.012361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.094 [2024-06-07 23:27:18.012368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.094 [2024-06-07 23:27:18.012377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.094 [2024-06-07 23:27:18.012383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.094 [2024-06-07 23:27:18.012393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.094 [2024-06-07 23:27:18.012399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.094 [2024-06-07 23:27:18.012408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.094 [2024-06-07 23:27:18.012415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.094 [2024-06-07 23:27:18.012424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.094 [2024-06-07 23:27:18.012431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.094 [2024-06-07 23:27:18.012440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.094 [2024-06-07 23:27:18.012447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.094 [2024-06-07 23:27:18.012458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.094 [2024-06-07 23:27:18.012465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.094 [2024-06-07 23:27:18.012474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.094 [2024-06-07 23:27:18.012481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.094 [2024-06-07 23:27:18.012490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:20 nsid:1 lba:12848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.095 [2024-06-07 23:27:18.012497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.095 [2024-06-07 23:27:18.012506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.095 [2024-06-07 23:27:18.012512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.095 [2024-06-07 23:27:18.012522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.095 [2024-06-07 23:27:18.012530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.095 [2024-06-07 23:27:18.012539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:13448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.095 [2024-06-07 23:27:18.012546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.095 [2024-06-07 23:27:18.012555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.095 [2024-06-07 23:27:18.012561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.095 [2024-06-07 23:27:18.012571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.095 [2024-06-07 23:27:18.012578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.095 [2024-06-07 23:27:18.012587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:13472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.095 [2024-06-07 23:27:18.012594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.095 [2024-06-07 23:27:18.012603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.095 [2024-06-07 23:27:18.012610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.095 [2024-06-07 23:27:18.012618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.095 [2024-06-07 23:27:18.012626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.095 [2024-06-07 23:27:18.012635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.095 [2024-06-07 23:27:18.012642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.095 [2024-06-07 23:27:18.012651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13504 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.095 [2024-06-07 23:27:18.012659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.095 [2024-06-07 23:27:18.012668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.095 [2024-06-07 23:27:18.012675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.095 [2024-06-07 23:27:18.012684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.095 [2024-06-07 23:27:18.012691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.095 [2024-06-07 23:27:18.012700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.095 [2024-06-07 23:27:18.012707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.095 [2024-06-07 23:27:18.012716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.095 [2024-06-07 23:27:18.012723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.095 [2024-06-07 23:27:18.012732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.095 [2024-06-07 23:27:18.012739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.095 [2024-06-07 23:27:18.012748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.095 [2024-06-07 23:27:18.012754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.095 [2024-06-07 23:27:18.012763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.095 [2024-06-07 23:27:18.012770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.095 [2024-06-07 23:27:18.012779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.095 [2024-06-07 23:27:18.012786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.095 [2024-06-07 23:27:18.012795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.095 [2024-06-07 23:27:18.012801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.095 [2024-06-07 23:27:18.012810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.095 
[2024-06-07 23:27:18.012818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.095 [2024-06-07 23:27:18.012827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.095 [2024-06-07 23:27:18.012833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.095 [2024-06-07 23:27:18.012842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.095 [2024-06-07 23:27:18.012849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.095 [2024-06-07 23:27:18.012860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.095 [2024-06-07 23:27:18.012867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.095 [2024-06-07 23:27:18.012875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.095 [2024-06-07 23:27:18.012882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.095 [2024-06-07 23:27:18.012891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.095 [2024-06-07 23:27:18.012898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.095 [2024-06-07 23:27:18.012907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.095 [2024-06-07 23:27:18.012914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.095 [2024-06-07 23:27:18.012923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.095 [2024-06-07 23:27:18.012930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.095 [2024-06-07 23:27:18.012939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.095 [2024-06-07 23:27:18.012946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.095 [2024-06-07 23:27:18.012955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.095 [2024-06-07 23:27:18.012962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.095 [2024-06-07 23:27:18.012970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.095 [2024-06-07 23:27:18.012977] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.095 [2024-06-07 23:27:18.012986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.095 [2024-06-07 23:27:18.012993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.095 [2024-06-07 23:27:18.013002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.095 [2024-06-07 23:27:18.013009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.095 [2024-06-07 23:27:18.013017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.095 [2024-06-07 23:27:18.013024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.095 [2024-06-07 23:27:18.013033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.095 [2024-06-07 23:27:18.013041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.095 [2024-06-07 23:27:18.013050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.095 [2024-06-07 23:27:18.013057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.095 [2024-06-07 23:27:18.013067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.095 [2024-06-07 23:27:18.013075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.096 [2024-06-07 23:27:18.013083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.096 [2024-06-07 23:27:18.013090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.096 [2024-06-07 23:27:18.013099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.096 [2024-06-07 23:27:18.013106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.096 [2024-06-07 23:27:18.013115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.096 [2024-06-07 23:27:18.013122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.096 [2024-06-07 23:27:18.013131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.096 [2024-06-07 23:27:18.013138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.096 [2024-06-07 23:27:18.013147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.096 [2024-06-07 23:27:18.013154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.096 [2024-06-07 23:27:18.013162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.096 [2024-06-07 23:27:18.013169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.096 [2024-06-07 23:27:18.013178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.096 [2024-06-07 23:27:18.013185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.096 [2024-06-07 23:27:18.013194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.097 [2024-06-07 23:27:18.013201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.097 [2024-06-07 23:27:18.013217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.097 [2024-06-07 23:27:18.013232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.097 [2024-06-07 23:27:18.013252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.097 [2024-06-07 23:27:18.013269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.097 [2024-06-07 23:27:18.013285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.097 [2024-06-07 23:27:18.013301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.097 [2024-06-07 23:27:18.013317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.097 [2024-06-07 23:27:18.013333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.097 [2024-06-07 23:27:18.013349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.097 [2024-06-07 23:27:18.013364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.097 [2024-06-07 23:27:18.013380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.097 [2024-06-07 23:27:18.013396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.097 [2024-06-07 23:27:18.013412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.097 [2024-06-07 23:27:18.013428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.097 [2024-06-07 23:27:18.013443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.097 [2024-06-07 23:27:18.013459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 
[2024-06-07 23:27:18.013469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.097 [2024-06-07 23:27:18.013477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.097 [2024-06-07 23:27:18.013492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.097 [2024-06-07 23:27:18.013508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.097 [2024-06-07 23:27:18.013524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.097 [2024-06-07 23:27:18.013539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.097 [2024-06-07 23:27:18.013555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.097 [2024-06-07 23:27:18.013571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.097 [2024-06-07 23:27:18.013587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.097 [2024-06-07 23:27:18.013603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.097 [2024-06-07 23:27:18.013619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013628] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.097 [2024-06-07 23:27:18.013635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.097 [2024-06-07 23:27:18.013651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.097 [2024-06-07 23:27:18.013668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.097 [2024-06-07 23:27:18.013684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.097 [2024-06-07 23:27:18.013700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.097 [2024-06-07 23:27:18.013716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.097 [2024-06-07 23:27:18.013732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.097 [2024-06-07 23:27:18.013748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.097 [2024-06-07 23:27:18.013764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.097 [2024-06-07 23:27:18.013780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:33 nsid:1 lba:13888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.097 [2024-06-07 23:27:18.013796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.097 [2024-06-07 23:27:18.013811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.097 [2024-06-07 23:27:18.013827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.097 [2024-06-07 23:27:18.013836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.098 [2024-06-07 23:27:18.013843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.098 [2024-06-07 23:27:18.013852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.098 [2024-06-07 23:27:18.013859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.098 [2024-06-07 23:27:18.013869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.098 [2024-06-07 23:27:18.013876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.098 [2024-06-07 23:27:18.013885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.098 [2024-06-07 23:27:18.013892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.098 [2024-06-07 23:27:18.013901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.098 [2024-06-07 23:27:18.013908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.098 [2024-06-07 23:27:18.013917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.098 [2024-06-07 23:27:18.013923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.098 [2024-06-07 23:27:18.013932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.098 [2024-06-07 23:27:18.013939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.098 [2024-06-07 23:27:18.013948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13968 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.098 [2024-06-07 23:27:18.013955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.098 [2024-06-07 23:27:18.013964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.098 [2024-06-07 23:27:18.013970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.098 [2024-06-07 23:27:18.013979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:02.098 [2024-06-07 23:27:18.013986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.098 [2024-06-07 23:27:18.013995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.098 [2024-06-07 23:27:18.014002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.098 [2024-06-07 23:27:18.014010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.098 [2024-06-07 23:27:18.014017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.098 [2024-06-07 23:27:18.014026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.098 [2024-06-07 23:27:18.014033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.098 [2024-06-07 23:27:18.014056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:02.098 [2024-06-07 23:27:18.014063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:02.098 [2024-06-07 23:27:18.014070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13400 len:8 PRP1 0x0 PRP2 0x0 00:31:02.098 [2024-06-07 23:27:18.014077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.098 [2024-06-07 23:27:18.014117] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xec8a10 was disconnected and freed. reset controller. 
00:31:02.098 [2024-06-07 23:27:18.014127] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:31:02.098 [2024-06-07 23:27:18.014146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:02.098 [2024-06-07 23:27:18.014154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.098 [2024-06-07 23:27:18.014163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:02.098 [2024-06-07 23:27:18.014170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.098 [2024-06-07 23:27:18.014178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:02.098 [2024-06-07 23:27:18.014185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.098 [2024-06-07 23:27:18.014192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:02.098 [2024-06-07 23:27:18.014200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.098 [2024-06-07 23:27:18.014207] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:02.098 [2024-06-07 23:27:18.016577] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:02.098 [2024-06-07 23:27:18.016602] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea5c10 (9): Bad file descriptor 00:31:02.098 [2024-06-07 23:27:18.050575] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
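The wall of NOTICE lines above is the expected signature of this subtest: when the active path is torn down, every queued READ/WRITE on the I/O queue pair completes with ABORTED - SQ DELETION (00/08), bdev_nvme fails the transport ID over to the next listener (here 10.0.0.2:4422 -> 10.0.0.2:4420), resets the controller, and I/O resumes on the new path. One way to skim the captured try.txt for just those milestones, not a step the script itself runs:

  grep -E 'Start failover from|Resetting controller successful' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt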
00:31:02.098 00:31:02.098 Latency(us) 00:31:02.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:02.098 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:02.098 Verification LBA range: start 0x0 length 0x4000 00:31:02.098 NVMe0n1 : 15.00 19609.86 76.60 350.62 0.00 6396.32 744.11 12670.29 00:31:02.098 =================================================================================================================== 00:31:02.098 Total : 19609.86 76.60 350.62 0.00 6396.32 744.11 12670.29 00:31:02.098 Received shutdown signal, test time was about 15.000000 seconds 00:31:02.098 00:31:02.098 Latency(us) 00:31:02.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:02.098 =================================================================================================================== 00:31:02.098 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:02.098 23:27:24 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:31:02.098 23:27:24 -- host/failover.sh@65 -- # count=3 00:31:02.098 23:27:24 -- host/failover.sh@67 -- # (( count != 3 )) 00:31:02.098 23:27:24 -- host/failover.sh@73 -- # bdevperf_pid=3018876 00:31:02.098 23:27:24 -- host/failover.sh@75 -- # waitforlisten 3018876 /var/tmp/bdevperf.sock 00:31:02.098 23:27:24 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:31:02.098 23:27:24 -- common/autotest_common.sh@819 -- # '[' -z 3018876 ']' 00:31:02.098 23:27:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:02.098 23:27:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:02.098 23:27:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:02.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
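The bdevperf process launched here stays idle (-z) until it is driven over /var/tmp/bdevperf.sock. A condensed sketch of the subtest that follows, assembled only from the rpc.py and bdevperf.py invocations visible in this trace (absolute workspace paths shortened, PID bookkeeping omitted):

  # start bdevperf idle (-z) with an RPC socket; remaining flags copied from the trace
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  # expose two extra target listeners so the host has alternate paths
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # attach the same subsystem through all three ports, then drop the first-attached (active) one
  for port in 4420 4421 4422; do
      scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
          -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # run the verify workload; bdev_nvme retries the aborted I/O on a surviving path
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Detaching the first-attached path is what produces the 'Start failover from 10.0.0.2:4420 to 10.0.0.2:4421' notice that shows up in the try.txt dump a little further down.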
00:31:02.098 23:27:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:02.098 23:27:24 -- common/autotest_common.sh@10 -- # set +x 00:31:02.667 23:27:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:02.667 23:27:25 -- common/autotest_common.sh@852 -- # return 0 00:31:02.667 23:27:25 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:02.667 [2024-06-07 23:27:25.205856] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:02.667 23:27:25 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:02.927 [2024-06-07 23:27:25.362256] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:02.927 23:27:25 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:03.187 NVMe0n1 00:31:03.187 23:27:25 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:03.447 00:31:03.447 23:27:25 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:03.447 00:31:03.707 23:27:26 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:03.707 23:27:26 -- host/failover.sh@82 -- # grep -q NVMe0 00:31:03.707 23:27:26 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:03.967 23:27:26 -- host/failover.sh@87 -- # sleep 3 00:31:07.266 23:27:29 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:07.266 23:27:29 -- host/failover.sh@88 -- # grep -q NVMe0 00:31:07.266 23:27:29 -- host/failover.sh@90 -- # run_test_pid=3019971 00:31:07.266 23:27:29 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:07.266 23:27:29 -- host/failover.sh@92 -- # wait 3019971 00:31:08.208 0 00:31:08.208 23:27:30 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:08.208 [2024-06-07 23:27:24.308850] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:31:08.208 [2024-06-07 23:27:24.308908] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3018876 ] 00:31:08.208 EAL: No free 2048 kB hugepages reported on node 1 00:31:08.208 [2024-06-07 23:27:24.368829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:08.208 [2024-06-07 23:27:24.397644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.208 [2024-06-07 23:27:26.442319] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:08.208 [2024-06-07 23:27:26.442366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:08.208 [2024-06-07 23:27:26.442377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.208 [2024-06-07 23:27:26.442386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:08.208 [2024-06-07 23:27:26.442394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.208 [2024-06-07 23:27:26.442402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:08.208 [2024-06-07 23:27:26.442409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.208 [2024-06-07 23:27:26.442416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:08.208 [2024-06-07 23:27:26.442423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.208 [2024-06-07 23:27:26.442430] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:08.208 [2024-06-07 23:27:26.442452] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:08.208 [2024-06-07 23:27:26.442467] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f28c10 (9): Bad file descriptor 00:31:08.208 [2024-06-07 23:27:26.449091] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:08.208 Running I/O for 1 seconds... 
00:31:08.208 00:31:08.208 Latency(us) 00:31:08.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:08.208 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:08.208 Verification LBA range: start 0x0 length 0x4000 00:31:08.208 NVMe0n1 : 1.00 19873.21 77.63 0.00 0.00 6410.62 976.21 13653.33 00:31:08.208 =================================================================================================================== 00:31:08.208 Total : 19873.21 77.63 0.00 0.00 6410.62 976.21 13653.33 00:31:08.208 23:27:30 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:08.208 23:27:30 -- host/failover.sh@95 -- # grep -q NVMe0 00:31:08.469 23:27:30 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:08.469 23:27:31 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:08.469 23:27:31 -- host/failover.sh@99 -- # grep -q NVMe0 00:31:08.729 23:27:31 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:08.729 23:27:31 -- host/failover.sh@101 -- # sleep 3 00:31:12.027 23:27:34 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:12.027 23:27:34 -- host/failover.sh@103 -- # grep -q NVMe0 00:31:12.027 23:27:34 -- host/failover.sh@108 -- # killprocess 3018876 00:31:12.027 23:27:34 -- common/autotest_common.sh@926 -- # '[' -z 3018876 ']' 00:31:12.027 23:27:34 -- common/autotest_common.sh@930 -- # kill -0 3018876 00:31:12.027 23:27:34 -- common/autotest_common.sh@931 -- # uname 00:31:12.027 23:27:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:12.027 23:27:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3018876 00:31:12.027 23:27:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:12.027 23:27:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:12.027 23:27:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3018876' 00:31:12.027 killing process with pid 3018876 00:31:12.027 23:27:34 -- common/autotest_common.sh@945 -- # kill 3018876 00:31:12.027 23:27:34 -- common/autotest_common.sh@950 -- # wait 3018876 00:31:12.287 23:27:34 -- host/failover.sh@110 -- # sync 00:31:12.287 23:27:34 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:12.287 23:27:34 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:31:12.287 23:27:34 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:12.287 23:27:34 -- host/failover.sh@116 -- # nvmftestfini 00:31:12.287 23:27:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:12.287 23:27:34 -- nvmf/common.sh@116 -- # sync 00:31:12.287 23:27:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:12.287 23:27:34 -- nvmf/common.sh@119 -- # set +e 00:31:12.287 23:27:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:12.287 23:27:34 -- nvmf/common.sh@121 -- # 
modprobe -v -r nvme-tcp 00:31:12.287 rmmod nvme_tcp 00:31:12.287 rmmod nvme_fabrics 00:31:12.287 rmmod nvme_keyring 00:31:12.287 23:27:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:12.287 23:27:34 -- nvmf/common.sh@123 -- # set -e 00:31:12.548 23:27:34 -- nvmf/common.sh@124 -- # return 0 00:31:12.548 23:27:34 -- nvmf/common.sh@477 -- # '[' -n 3014764 ']' 00:31:12.548 23:27:34 -- nvmf/common.sh@478 -- # killprocess 3014764 00:31:12.548 23:27:34 -- common/autotest_common.sh@926 -- # '[' -z 3014764 ']' 00:31:12.548 23:27:34 -- common/autotest_common.sh@930 -- # kill -0 3014764 00:31:12.548 23:27:34 -- common/autotest_common.sh@931 -- # uname 00:31:12.548 23:27:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:12.548 23:27:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3014764 00:31:12.548 23:27:35 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:12.548 23:27:35 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:12.548 23:27:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3014764' 00:31:12.548 killing process with pid 3014764 00:31:12.548 23:27:35 -- common/autotest_common.sh@945 -- # kill 3014764 00:31:12.548 23:27:35 -- common/autotest_common.sh@950 -- # wait 3014764 00:31:12.548 23:27:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:12.548 23:27:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:12.548 23:27:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:12.548 23:27:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:12.548 23:27:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:12.548 23:27:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:12.548 23:27:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:12.548 23:27:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:15.093 23:27:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:15.093 00:31:15.093 real 0m38.972s 00:31:15.093 user 2m0.392s 00:31:15.093 sys 0m8.061s 00:31:15.093 23:27:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:15.093 23:27:37 -- common/autotest_common.sh@10 -- # set +x 00:31:15.093 ************************************ 00:31:15.093 END TEST nvmf_failover 00:31:15.093 ************************************ 00:31:15.093 23:27:37 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:15.093 23:27:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:15.093 23:27:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:15.093 23:27:37 -- common/autotest_common.sh@10 -- # set +x 00:31:15.093 ************************************ 00:31:15.093 START TEST nvmf_discovery 00:31:15.094 ************************************ 00:31:15.094 23:27:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:15.094 * Looking for test storage... 
00:31:15.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:15.094 23:27:37 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:15.094 23:27:37 -- nvmf/common.sh@7 -- # uname -s 00:31:15.094 23:27:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:15.094 23:27:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:15.094 23:27:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:15.094 23:27:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:15.094 23:27:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:15.094 23:27:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:15.094 23:27:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:15.094 23:27:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:15.094 23:27:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:15.094 23:27:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:15.094 23:27:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:15.094 23:27:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:15.094 23:27:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:15.094 23:27:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:15.094 23:27:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:15.094 23:27:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:15.094 23:27:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:15.094 23:27:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:15.094 23:27:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:15.094 23:27:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.094 23:27:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.094 23:27:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.094 23:27:37 -- paths/export.sh@5 -- # export PATH 00:31:15.094 23:27:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.094 23:27:37 -- nvmf/common.sh@46 -- # : 0 00:31:15.094 23:27:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:15.094 23:27:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:15.094 23:27:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:15.094 23:27:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:15.094 23:27:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:15.094 23:27:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:15.094 23:27:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:15.094 23:27:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:15.094 23:27:37 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:15.094 23:27:37 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:31:15.094 23:27:37 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:15.094 23:27:37 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:15.094 23:27:37 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:15.094 23:27:37 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:15.094 23:27:37 -- host/discovery.sh@25 -- # nvmftestinit 00:31:15.094 23:27:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:15.094 23:27:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:15.094 23:27:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:15.094 23:27:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:15.094 23:27:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:15.094 23:27:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:15.094 23:27:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:15.094 23:27:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:15.094 23:27:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:15.094 23:27:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:15.094 23:27:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:15.094 23:27:37 -- common/autotest_common.sh@10 -- # set +x 00:31:21.712 23:27:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:21.712 23:27:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:21.712 23:27:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:21.712 23:27:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:21.712 23:27:44 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:21.712 23:27:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:21.712 23:27:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:21.712 23:27:44 -- nvmf/common.sh@294 -- # net_devs=() 00:31:21.712 23:27:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:21.712 23:27:44 -- nvmf/common.sh@295 -- # e810=() 00:31:21.712 23:27:44 -- nvmf/common.sh@295 -- # local -ga e810 00:31:21.712 23:27:44 -- nvmf/common.sh@296 -- # x722=() 00:31:21.712 23:27:44 -- nvmf/common.sh@296 -- # local -ga x722 00:31:21.712 23:27:44 -- nvmf/common.sh@297 -- # mlx=() 00:31:21.712 23:27:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:21.712 23:27:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:21.712 23:27:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:21.712 23:27:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:21.712 23:27:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:21.712 23:27:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:21.712 23:27:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:21.712 23:27:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:21.712 23:27:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:21.712 23:27:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:21.712 23:27:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:21.712 23:27:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:21.712 23:27:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:21.712 23:27:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:21.712 23:27:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:21.712 23:27:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:21.712 23:27:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:21.712 23:27:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:21.712 23:27:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:21.712 23:27:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:21.712 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:21.712 23:27:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:21.712 23:27:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:21.712 23:27:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.712 23:27:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.712 23:27:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:21.712 23:27:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:21.712 23:27:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:21.712 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:21.712 23:27:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:21.712 23:27:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:21.712 23:27:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.712 23:27:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.712 23:27:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:21.712 23:27:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:21.712 23:27:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:21.712 23:27:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:21.712 23:27:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:21.712 
23:27:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.712 23:27:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:21.712 23:27:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.712 23:27:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:21.712 Found net devices under 0000:31:00.0: cvl_0_0 00:31:21.712 23:27:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:21.712 23:27:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:21.712 23:27:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.712 23:27:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:21.712 23:27:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.712 23:27:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:21.712 Found net devices under 0000:31:00.1: cvl_0_1 00:31:21.712 23:27:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:21.712 23:27:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:21.712 23:27:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:21.712 23:27:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:21.712 23:27:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:21.712 23:27:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:21.712 23:27:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:21.712 23:27:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:21.712 23:27:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:21.712 23:27:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:21.712 23:27:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:21.712 23:27:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:21.712 23:27:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:21.712 23:27:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:21.712 23:27:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:21.712 23:27:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:21.712 23:27:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:21.712 23:27:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:21.712 23:27:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:21.973 23:27:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:21.973 23:27:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:21.973 23:27:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:21.973 23:27:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:21.973 23:27:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:21.973 23:27:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:21.973 23:27:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:21.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:21.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:31:21.973 00:31:21.973 --- 10.0.0.2 ping statistics --- 00:31:21.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.973 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:31:21.973 23:27:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:21.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:21.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.367 ms 00:31:21.973 00:31:21.973 --- 10.0.0.1 ping statistics --- 00:31:21.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.973 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:31:21.973 23:27:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:21.973 23:27:44 -- nvmf/common.sh@410 -- # return 0 00:31:21.973 23:27:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:21.973 23:27:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:21.973 23:27:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:21.973 23:27:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:21.973 23:27:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:21.973 23:27:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:21.973 23:27:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:21.973 23:27:44 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:21.973 23:27:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:21.973 23:27:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:21.973 23:27:44 -- common/autotest_common.sh@10 -- # set +x 00:31:21.973 23:27:44 -- nvmf/common.sh@469 -- # nvmfpid=3025087 00:31:21.973 23:27:44 -- nvmf/common.sh@470 -- # waitforlisten 3025087 00:31:21.973 23:27:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:21.973 23:27:44 -- common/autotest_common.sh@819 -- # '[' -z 3025087 ']' 00:31:21.973 23:27:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:21.973 23:27:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:21.973 23:27:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:21.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:21.973 23:27:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:21.973 23:27:44 -- common/autotest_common.sh@10 -- # set +x 00:31:22.234 [2024-06-07 23:27:44.669933] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:22.234 [2024-06-07 23:27:44.669982] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:22.234 EAL: No free 2048 kB hugepages reported on node 1 00:31:22.234 [2024-06-07 23:27:44.753695] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:22.234 [2024-06-07 23:27:44.795033] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:22.234 [2024-06-07 23:27:44.795170] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:22.234 [2024-06-07 23:27:44.795179] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:22.234 [2024-06-07 23:27:44.795187] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
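While the target application comes up, note the topology nvmftestinit just built: the first E810 port (cvl_0_0) is moved into a private namespace for the target and the second (cvl_0_1) stays in the root namespace as the initiator, giving a back-to-back 10.0.0.1/10.0.0.2 link. A minimal sketch of that setup, using only the commands echoed in the trace (interface names as detected on this node):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator

This is also why NVMF_APP is rewritten with the NVMF_TARGET_NS_CMD prefix above: the target binary itself has to run as 'ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt'.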
00:31:22.234 [2024-06-07 23:27:44.795216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:22.805 23:27:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:22.805 23:27:45 -- common/autotest_common.sh@852 -- # return 0 00:31:22.805 23:27:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:22.805 23:27:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:22.805 23:27:45 -- common/autotest_common.sh@10 -- # set +x 00:31:22.805 23:27:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:22.805 23:27:45 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:22.805 23:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.805 23:27:45 -- common/autotest_common.sh@10 -- # set +x 00:31:22.805 [2024-06-07 23:27:45.460869] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:22.805 23:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.805 23:27:45 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:22.805 23:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.805 23:27:45 -- common/autotest_common.sh@10 -- # set +x 00:31:22.805 [2024-06-07 23:27:45.473015] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:22.805 23:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.805 23:27:45 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:22.805 23:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.805 23:27:45 -- common/autotest_common.sh@10 -- # set +x 00:31:23.066 null0 00:31:23.066 23:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.066 23:27:45 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:23.066 23:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.066 23:27:45 -- common/autotest_common.sh@10 -- # set +x 00:31:23.066 null1 00:31:23.066 23:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.066 23:27:45 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:23.066 23:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.066 23:27:45 -- common/autotest_common.sh@10 -- # set +x 00:31:23.066 23:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.066 23:27:45 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:23.066 23:27:45 -- host/discovery.sh@45 -- # hostpid=3025293 00:31:23.066 23:27:45 -- host/discovery.sh@46 -- # waitforlisten 3025293 /tmp/host.sock 00:31:23.066 23:27:45 -- common/autotest_common.sh@819 -- # '[' -z 3025293 ']' 00:31:23.066 23:27:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:31:23.066 23:27:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:23.066 23:27:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:23.066 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:23.066 23:27:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:23.066 23:27:45 -- common/autotest_common.sh@10 -- # set +x 00:31:23.066 [2024-06-07 23:27:45.541291] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:31:23.066 [2024-06-07 23:27:45.541339] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3025293 ] 00:31:23.066 EAL: No free 2048 kB hugepages reported on node 1 00:31:23.066 [2024-06-07 23:27:45.600011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:23.066 [2024-06-07 23:27:45.629000] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:23.066 [2024-06-07 23:27:45.629132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:23.637 23:27:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:23.638 23:27:46 -- common/autotest_common.sh@852 -- # return 0 00:31:23.638 23:27:46 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:23.638 23:27:46 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:23.638 23:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.638 23:27:46 -- common/autotest_common.sh@10 -- # set +x 00:31:23.638 23:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.638 23:27:46 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:23.638 23:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.638 23:27:46 -- common/autotest_common.sh@10 -- # set +x 00:31:23.638 23:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.638 23:27:46 -- host/discovery.sh@72 -- # notify_id=0 00:31:23.898 23:27:46 -- host/discovery.sh@78 -- # get_subsystem_names 00:31:23.898 23:27:46 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:23.898 23:27:46 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:23.898 23:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.898 23:27:46 -- host/discovery.sh@59 -- # sort 00:31:23.898 23:27:46 -- common/autotest_common.sh@10 -- # set +x 00:31:23.898 23:27:46 -- host/discovery.sh@59 -- # xargs 00:31:23.898 23:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.898 23:27:46 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:31:23.898 23:27:46 -- host/discovery.sh@79 -- # get_bdev_list 00:31:23.898 23:27:46 -- host/discovery.sh@55 -- # xargs 00:31:23.898 23:27:46 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:23.898 23:27:46 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:23.898 23:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.898 23:27:46 -- host/discovery.sh@55 -- # sort 00:31:23.898 23:27:46 -- common/autotest_common.sh@10 -- # set +x 00:31:23.898 23:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.898 23:27:46 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:31:23.898 23:27:46 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:23.898 23:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.898 23:27:46 -- common/autotest_common.sh@10 -- # set +x 00:31:23.898 23:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.898 23:27:46 -- host/discovery.sh@82 -- # get_subsystem_names 00:31:23.899 23:27:46 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:23.899 23:27:46 -- host/discovery.sh@59 -- # jq -r 
'.[].name' 00:31:23.899 23:27:46 -- host/discovery.sh@59 -- # sort 00:31:23.899 23:27:46 -- host/discovery.sh@59 -- # xargs 00:31:23.899 23:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.899 23:27:46 -- common/autotest_common.sh@10 -- # set +x 00:31:23.899 23:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.899 23:27:46 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:31:23.899 23:27:46 -- host/discovery.sh@83 -- # get_bdev_list 00:31:23.899 23:27:46 -- host/discovery.sh@55 -- # xargs 00:31:23.899 23:27:46 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:23.899 23:27:46 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:23.899 23:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.899 23:27:46 -- host/discovery.sh@55 -- # sort 00:31:23.899 23:27:46 -- common/autotest_common.sh@10 -- # set +x 00:31:23.899 23:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.899 23:27:46 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:23.899 23:27:46 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:23.899 23:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.899 23:27:46 -- common/autotest_common.sh@10 -- # set +x 00:31:23.899 23:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.899 23:27:46 -- host/discovery.sh@86 -- # get_subsystem_names 00:31:23.899 23:27:46 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:23.899 23:27:46 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:23.899 23:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.899 23:27:46 -- host/discovery.sh@59 -- # sort 00:31:23.899 23:27:46 -- common/autotest_common.sh@10 -- # set +x 00:31:23.899 23:27:46 -- host/discovery.sh@59 -- # xargs 00:31:23.899 23:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.159 23:27:46 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:31:24.159 23:27:46 -- host/discovery.sh@87 -- # get_bdev_list 00:31:24.160 23:27:46 -- host/discovery.sh@55 -- # xargs 00:31:24.160 23:27:46 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:24.160 23:27:46 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:24.160 23:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:24.160 23:27:46 -- host/discovery.sh@55 -- # sort 00:31:24.160 23:27:46 -- common/autotest_common.sh@10 -- # set +x 00:31:24.160 23:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.160 23:27:46 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:24.160 23:27:46 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:24.160 23:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:24.160 23:27:46 -- common/autotest_common.sh@10 -- # set +x 00:31:24.160 [2024-06-07 23:27:46.668161] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:24.160 23:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.160 23:27:46 -- host/discovery.sh@92 -- # get_subsystem_names 00:31:24.160 23:27:46 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:24.160 23:27:46 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:24.160 23:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:24.160 23:27:46 -- host/discovery.sh@59 -- # sort 00:31:24.160 23:27:46 -- common/autotest_common.sh@10 -- # set +x 00:31:24.160 23:27:46 
-- host/discovery.sh@59 -- # xargs 00:31:24.160 23:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.160 23:27:46 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:24.160 23:27:46 -- host/discovery.sh@93 -- # get_bdev_list 00:31:24.160 23:27:46 -- host/discovery.sh@55 -- # xargs 00:31:24.160 23:27:46 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:24.160 23:27:46 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:24.160 23:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:24.160 23:27:46 -- host/discovery.sh@55 -- # sort 00:31:24.160 23:27:46 -- common/autotest_common.sh@10 -- # set +x 00:31:24.160 23:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.160 23:27:46 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:31:24.160 23:27:46 -- host/discovery.sh@94 -- # get_notification_count 00:31:24.160 23:27:46 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:24.160 23:27:46 -- host/discovery.sh@74 -- # jq '. | length' 00:31:24.160 23:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:24.160 23:27:46 -- common/autotest_common.sh@10 -- # set +x 00:31:24.160 23:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.160 23:27:46 -- host/discovery.sh@74 -- # notification_count=0 00:31:24.160 23:27:46 -- host/discovery.sh@75 -- # notify_id=0 00:31:24.160 23:27:46 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:31:24.160 23:27:46 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:24.160 23:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:24.160 23:27:46 -- common/autotest_common.sh@10 -- # set +x 00:31:24.420 23:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.420 23:27:46 -- host/discovery.sh@100 -- # sleep 1 00:31:24.991 [2024-06-07 23:27:47.371483] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:24.991 [2024-06-07 23:27:47.371503] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:24.991 [2024-06-07 23:27:47.371517] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:24.991 [2024-06-07 23:27:47.459788] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:24.991 [2024-06-07 23:27:47.560403] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:24.991 [2024-06-07 23:27:47.560427] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:25.251 23:27:47 -- host/discovery.sh@101 -- # get_subsystem_names 00:31:25.251 23:27:47 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:25.251 23:27:47 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:25.251 23:27:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:25.251 23:27:47 -- host/discovery.sh@59 -- # sort 00:31:25.251 23:27:47 -- common/autotest_common.sh@10 -- # set +x 00:31:25.251 23:27:47 -- host/discovery.sh@59 -- # xargs 00:31:25.252 23:27:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:25.252 23:27:47 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:25.252 23:27:47 -- host/discovery.sh@102 -- # get_bdev_list 00:31:25.252 23:27:47 -- host/discovery.sh@55 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:25.252 23:27:47 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:25.252 23:27:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:25.252 23:27:47 -- host/discovery.sh@55 -- # sort 00:31:25.252 23:27:47 -- common/autotest_common.sh@10 -- # set +x 00:31:25.252 23:27:47 -- host/discovery.sh@55 -- # xargs 00:31:25.252 23:27:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:25.512 23:27:47 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:25.512 23:27:47 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:31:25.512 23:27:47 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:25.512 23:27:47 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:25.512 23:27:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:25.512 23:27:47 -- host/discovery.sh@63 -- # sort -n 00:31:25.512 23:27:47 -- common/autotest_common.sh@10 -- # set +x 00:31:25.512 23:27:47 -- host/discovery.sh@63 -- # xargs 00:31:25.512 23:27:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:25.512 23:27:48 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:31:25.512 23:27:48 -- host/discovery.sh@104 -- # get_notification_count 00:31:25.512 23:27:48 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:25.512 23:27:48 -- host/discovery.sh@74 -- # jq '. | length' 00:31:25.512 23:27:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:25.512 23:27:48 -- common/autotest_common.sh@10 -- # set +x 00:31:25.512 23:27:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:25.512 23:27:48 -- host/discovery.sh@74 -- # notification_count=1 00:31:25.512 23:27:48 -- host/discovery.sh@75 -- # notify_id=1 00:31:25.512 23:27:48 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:31:25.512 23:27:48 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:25.512 23:27:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:25.512 23:27:48 -- common/autotest_common.sh@10 -- # set +x 00:31:25.512 23:27:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:25.512 23:27:48 -- host/discovery.sh@109 -- # sleep 1 00:31:26.453 23:27:49 -- host/discovery.sh@110 -- # get_bdev_list 00:31:26.453 23:27:49 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:26.453 23:27:49 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:26.453 23:27:49 -- host/discovery.sh@55 -- # sort 00:31:26.453 23:27:49 -- host/discovery.sh@55 -- # xargs 00:31:26.453 23:27:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.453 23:27:49 -- common/autotest_common.sh@10 -- # set +x 00:31:26.453 23:27:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.453 23:27:49 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:26.453 23:27:49 -- host/discovery.sh@111 -- # get_notification_count 00:31:26.453 23:27:49 -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:26.453 23:27:49 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:26.453 23:27:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.453 23:27:49 -- common/autotest_common.sh@10 -- # set +x 00:31:26.453 23:27:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.715 23:27:49 -- host/discovery.sh@74 -- # notification_count=1 00:31:26.715 23:27:49 -- host/discovery.sh@75 -- # notify_id=2 00:31:26.715 23:27:49 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:31:26.715 23:27:49 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:26.715 23:27:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.715 23:27:49 -- common/autotest_common.sh@10 -- # set +x 00:31:26.715 [2024-06-07 23:27:49.170822] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:26.715 [2024-06-07 23:27:49.171154] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:26.715 [2024-06-07 23:27:49.171179] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:26.715 23:27:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.715 23:27:49 -- host/discovery.sh@117 -- # sleep 1 00:31:26.715 [2024-06-07 23:27:49.259466] bdev_nvme.c:6677:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:26.975 [2024-06-07 23:27:49.522785] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:26.975 [2024-06-07 23:27:49.522803] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:26.975 [2024-06-07 23:27:49.522809] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:27.546 23:27:50 -- host/discovery.sh@118 -- # get_subsystem_names 00:31:27.546 23:27:50 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:27.546 23:27:50 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:27.546 23:27:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:27.546 23:27:50 -- host/discovery.sh@59 -- # sort 00:31:27.546 23:27:50 -- common/autotest_common.sh@10 -- # set +x 00:31:27.546 23:27:50 -- host/discovery.sh@59 -- # xargs 00:31:27.546 23:27:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:27.808 23:27:50 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:27.808 23:27:50 -- host/discovery.sh@119 -- # get_bdev_list 00:31:27.808 23:27:50 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:27.808 23:27:50 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:27.808 23:27:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:27.808 23:27:50 -- host/discovery.sh@55 -- # sort 00:31:27.808 23:27:50 -- common/autotest_common.sh@10 -- # set +x 00:31:27.808 23:27:50 -- host/discovery.sh@55 -- # xargs 00:31:27.808 23:27:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:27.808 23:27:50 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:27.808 23:27:50 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:31:27.808 23:27:50 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:27.808 23:27:50 -- 
host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:27.808 23:27:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:27.808 23:27:50 -- host/discovery.sh@63 -- # sort -n 00:31:27.808 23:27:50 -- common/autotest_common.sh@10 -- # set +x 00:31:27.808 23:27:50 -- host/discovery.sh@63 -- # xargs 00:31:27.808 23:27:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:27.808 23:27:50 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:27.808 23:27:50 -- host/discovery.sh@121 -- # get_notification_count 00:31:27.808 23:27:50 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:27.808 23:27:50 -- host/discovery.sh@74 -- # jq '. | length' 00:31:27.808 23:27:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:27.808 23:27:50 -- common/autotest_common.sh@10 -- # set +x 00:31:27.808 23:27:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:27.808 23:27:50 -- host/discovery.sh@74 -- # notification_count=0 00:31:27.808 23:27:50 -- host/discovery.sh@75 -- # notify_id=2 00:31:27.808 23:27:50 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:31:27.808 23:27:50 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:27.808 23:27:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:27.808 23:27:50 -- common/autotest_common.sh@10 -- # set +x 00:31:27.808 [2024-06-07 23:27:50.390395] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:27.808 [2024-06-07 23:27:50.390418] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:27.808 [2024-06-07 23:27:50.394436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:27.808 23:27:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:27.808 [2024-06-07 23:27:50.394455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.808 [2024-06-07 23:27:50.394464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:27.808 [2024-06-07 23:27:50.394472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.808 [2024-06-07 23:27:50.394480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:27.808 [2024-06-07 23:27:50.394487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.808 [2024-06-07 23:27:50.394495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:27.808 [2024-06-07 23:27:50.394502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.808 [2024-06-07 23:27:50.394509] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdafe0 is same with the state(5) to be set 00:31:27.808 23:27:50 -- host/discovery.sh@127 -- # sleep 1 00:31:27.808 [2024-06-07 23:27:50.404450] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fdafe0 (9): Bad file descriptor 
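The step above drops the 4420 listener from nqn.2016-06.io.spdk:cnode0 and then lets the discovery poller on the host prune that path; the connect() errno 111 retries that follow are the host still probing the removed listener until the next discovery log page arrives. The rpc_cmd calls in this trace are the autotest wrapper around SPDK's scripts/rpc.py; a minimal standalone sketch of the same remove-and-verify sequence (a sketch only, assuming the /tmp/host.sock host socket, the nvme0 controller name and the 10.0.0.2 address shown in this trace) might look like:

    # target side: drop the 4420 listener from the subsystem
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # host side: poll until only the 4421 path is left on controller nvme0
    while true; do
        paths=$(rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
                | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)
        [[ $paths == 4421 ]] && break
        sleep 1
    done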
00:31:27.808 [2024-06-07 23:27:50.414491] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:27.808 [2024-06-07 23:27:50.414760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.808 [2024-06-07 23:27:50.415144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.808 [2024-06-07 23:27:50.415154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fdafe0 with addr=10.0.0.2, port=4420 00:31:27.808 [2024-06-07 23:27:50.415162] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdafe0 is same with the state(5) to be set 00:31:27.808 [2024-06-07 23:27:50.415175] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fdafe0 (9): Bad file descriptor 00:31:27.808 [2024-06-07 23:27:50.415193] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:27.808 [2024-06-07 23:27:50.415201] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:27.808 [2024-06-07 23:27:50.415209] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:27.808 [2024-06-07 23:27:50.415222] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:27.808 [2024-06-07 23:27:50.424547] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:27.808 [2024-06-07 23:27:50.424923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.808 [2024-06-07 23:27:50.425269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.808 [2024-06-07 23:27:50.425280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fdafe0 with addr=10.0.0.2, port=4420 00:31:27.808 [2024-06-07 23:27:50.425287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdafe0 is same with the state(5) to be set 00:31:27.808 [2024-06-07 23:27:50.425298] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fdafe0 (9): Bad file descriptor 00:31:27.808 [2024-06-07 23:27:50.425323] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:27.808 [2024-06-07 23:27:50.425330] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:27.808 [2024-06-07 23:27:50.425337] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:27.808 [2024-06-07 23:27:50.425348] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:27.808 [2024-06-07 23:27:50.434599] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:27.808 [2024-06-07 23:27:50.434979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.808 [2024-06-07 23:27:50.435467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.808 [2024-06-07 23:27:50.435504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fdafe0 with addr=10.0.0.2, port=4420 00:31:27.808 [2024-06-07 23:27:50.435516] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdafe0 is same with the state(5) to be set 00:31:27.808 [2024-06-07 23:27:50.435534] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fdafe0 (9): Bad file descriptor 00:31:27.808 [2024-06-07 23:27:50.435565] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:27.808 [2024-06-07 23:27:50.435573] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:27.808 [2024-06-07 23:27:50.435581] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:27.808 [2024-06-07 23:27:50.435597] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:27.808 [2024-06-07 23:27:50.444656] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:27.808 [2024-06-07 23:27:50.445000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.808 [2024-06-07 23:27:50.445462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.808 [2024-06-07 23:27:50.445499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fdafe0 with addr=10.0.0.2, port=4420 00:31:27.808 [2024-06-07 23:27:50.445510] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdafe0 is same with the state(5) to be set 00:31:27.808 [2024-06-07 23:27:50.445528] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fdafe0 (9): Bad file descriptor 00:31:27.808 [2024-06-07 23:27:50.445540] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:27.808 [2024-06-07 23:27:50.445546] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:27.808 [2024-06-07 23:27:50.445554] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:27.808 [2024-06-07 23:27:50.445569] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:27.808 [2024-06-07 23:27:50.454710] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:27.808 [2024-06-07 23:27:50.455081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.808 [2024-06-07 23:27:50.455532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.808 [2024-06-07 23:27:50.455569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fdafe0 with addr=10.0.0.2, port=4420 00:31:27.808 [2024-06-07 23:27:50.455580] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdafe0 is same with the state(5) to be set 00:31:27.808 [2024-06-07 23:27:50.455598] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fdafe0 (9): Bad file descriptor 00:31:27.808 [2024-06-07 23:27:50.455623] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:27.808 [2024-06-07 23:27:50.455635] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:27.808 [2024-06-07 23:27:50.455644] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:27.808 [2024-06-07 23:27:50.455667] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:27.808 [2024-06-07 23:27:50.464766] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:27.808 [2024-06-07 23:27:50.465134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.808 [2024-06-07 23:27:50.465542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.808 [2024-06-07 23:27:50.465552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fdafe0 with addr=10.0.0.2, port=4420 00:31:27.808 [2024-06-07 23:27:50.465561] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdafe0 is same with the state(5) to be set 00:31:27.808 [2024-06-07 23:27:50.465572] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fdafe0 (9): Bad file descriptor 00:31:27.808 [2024-06-07 23:27:50.465582] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:27.808 [2024-06-07 23:27:50.465588] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:27.808 [2024-06-07 23:27:50.465595] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:27.808 [2024-06-07 23:27:50.465606] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:27.808 [2024-06-07 23:27:50.474820] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:27.808 [2024-06-07 23:27:50.475086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.808 [2024-06-07 23:27:50.475468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.808 [2024-06-07 23:27:50.475505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fdafe0 with addr=10.0.0.2, port=4420 00:31:27.808 [2024-06-07 23:27:50.475517] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdafe0 is same with the state(5) to be set 00:31:27.808 [2024-06-07 23:27:50.475536] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fdafe0 (9): Bad file descriptor 00:31:27.808 [2024-06-07 23:27:50.475550] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:27.808 [2024-06-07 23:27:50.475557] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:27.808 [2024-06-07 23:27:50.475567] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:27.808 [2024-06-07 23:27:50.475584] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:27.808 [2024-06-07 23:27:50.484874] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:27.809 [2024-06-07 23:27:50.485180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.809 [2024-06-07 23:27:50.485532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.809 [2024-06-07 23:27:50.485542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fdafe0 with addr=10.0.0.2, port=4420 00:31:27.809 [2024-06-07 23:27:50.485550] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdafe0 is same with the state(5) to be set 00:31:27.809 [2024-06-07 23:27:50.485561] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fdafe0 (9): Bad file descriptor 00:31:27.809 [2024-06-07 23:27:50.485572] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:27.809 [2024-06-07 23:27:50.485580] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:27.809 [2024-06-07 23:27:50.485594] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:27.809 [2024-06-07 23:27:50.485605] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:28.070 [2024-06-07 23:27:50.494930] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:28.070 [2024-06-07 23:27:50.495352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.070 [2024-06-07 23:27:50.495660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.070 [2024-06-07 23:27:50.495669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fdafe0 with addr=10.0.0.2, port=4420 00:31:28.070 [2024-06-07 23:27:50.495676] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdafe0 is same with the state(5) to be set 00:31:28.070 [2024-06-07 23:27:50.495687] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fdafe0 (9): Bad file descriptor 00:31:28.070 [2024-06-07 23:27:50.495697] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:28.070 [2024-06-07 23:27:50.495703] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:28.070 [2024-06-07 23:27:50.495710] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:28.070 [2024-06-07 23:27:50.495726] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:28.070 [2024-06-07 23:27:50.504983] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:28.070 [2024-06-07 23:27:50.505229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.070 [2024-06-07 23:27:50.505601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.070 [2024-06-07 23:27:50.505611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fdafe0 with addr=10.0.0.2, port=4420 00:31:28.070 [2024-06-07 23:27:50.505618] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdafe0 is same with the state(5) to be set 00:31:28.070 [2024-06-07 23:27:50.505629] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fdafe0 (9): Bad file descriptor 00:31:28.070 [2024-06-07 23:27:50.505638] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:28.070 [2024-06-07 23:27:50.505644] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:28.070 [2024-06-07 23:27:50.505651] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:28.070 [2024-06-07 23:27:50.505661] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:28.070 [2024-06-07 23:27:50.515034] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:28.070 [2024-06-07 23:27:50.515373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.070 [2024-06-07 23:27:50.515740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.070 [2024-06-07 23:27:50.515749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fdafe0 with addr=10.0.0.2, port=4420 00:31:28.070 [2024-06-07 23:27:50.515756] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdafe0 is same with the state(5) to be set 00:31:28.070 [2024-06-07 23:27:50.515766] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fdafe0 (9): Bad file descriptor 00:31:28.070 [2024-06-07 23:27:50.515782] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:28.070 [2024-06-07 23:27:50.515788] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:28.070 [2024-06-07 23:27:50.515798] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:28.070 [2024-06-07 23:27:50.515808] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:28.070 [2024-06-07 23:27:50.518849] bdev_nvme.c:6540:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:28.070 [2024-06-07 23:27:50.518867] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:29.010 23:27:51 -- host/discovery.sh@128 -- # get_subsystem_names 00:31:29.010 23:27:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:29.010 23:27:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:29.010 23:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:29.010 23:27:51 -- common/autotest_common.sh@10 -- # set +x 00:31:29.010 23:27:51 -- host/discovery.sh@59 -- # sort 00:31:29.010 23:27:51 -- host/discovery.sh@59 -- # xargs 00:31:29.010 23:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:29.010 23:27:51 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:29.010 23:27:51 -- host/discovery.sh@129 -- # get_bdev_list 00:31:29.010 23:27:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:29.010 23:27:51 -- host/discovery.sh@55 -- # xargs 00:31:29.010 23:27:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:29.010 23:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:29.010 23:27:51 -- common/autotest_common.sh@10 -- # set +x 00:31:29.010 23:27:51 -- host/discovery.sh@55 -- # sort 00:31:29.010 23:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:29.010 23:27:51 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:29.010 23:27:51 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:31:29.010 23:27:51 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:29.010 23:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:29.010 23:27:51 -- common/autotest_common.sh@10 -- # set +x 00:31:29.010 23:27:51 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:29.010 23:27:51 -- 
host/discovery.sh@63 -- # sort -n 00:31:29.010 23:27:51 -- host/discovery.sh@63 -- # xargs 00:31:29.010 23:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:29.010 23:27:51 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:31:29.010 23:27:51 -- host/discovery.sh@131 -- # get_notification_count 00:31:29.010 23:27:51 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:29.010 23:27:51 -- host/discovery.sh@74 -- # jq '. | length' 00:31:29.010 23:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:29.010 23:27:51 -- common/autotest_common.sh@10 -- # set +x 00:31:29.010 23:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:29.010 23:27:51 -- host/discovery.sh@74 -- # notification_count=0 00:31:29.010 23:27:51 -- host/discovery.sh@75 -- # notify_id=2 00:31:29.010 23:27:51 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:31:29.010 23:27:51 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:29.010 23:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:29.010 23:27:51 -- common/autotest_common.sh@10 -- # set +x 00:31:29.010 23:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:29.010 23:27:51 -- host/discovery.sh@135 -- # sleep 1 00:31:29.951 23:27:52 -- host/discovery.sh@136 -- # get_subsystem_names 00:31:29.951 23:27:52 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:29.951 23:27:52 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:29.951 23:27:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:29.951 23:27:52 -- host/discovery.sh@59 -- # sort 00:31:29.951 23:27:52 -- common/autotest_common.sh@10 -- # set +x 00:31:29.951 23:27:52 -- host/discovery.sh@59 -- # xargs 00:31:30.212 23:27:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.212 23:27:52 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:31:30.212 23:27:52 -- host/discovery.sh@137 -- # get_bdev_list 00:31:30.212 23:27:52 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:30.212 23:27:52 -- host/discovery.sh@55 -- # xargs 00:31:30.212 23:27:52 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:30.212 23:27:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.212 23:27:52 -- host/discovery.sh@55 -- # sort 00:31:30.212 23:27:52 -- common/autotest_common.sh@10 -- # set +x 00:31:30.212 23:27:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.212 23:27:52 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:31:30.212 23:27:52 -- host/discovery.sh@138 -- # get_notification_count 00:31:30.212 23:27:52 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:30.212 23:27:52 -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:30.212 23:27:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.212 23:27:52 -- common/autotest_common.sh@10 -- # set +x 00:31:30.212 23:27:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.212 23:27:52 -- host/discovery.sh@74 -- # notification_count=2 00:31:30.212 23:27:52 -- host/discovery.sh@75 -- # notify_id=4 00:31:30.212 23:27:52 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:31:30.212 23:27:52 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:30.212 23:27:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.212 23:27:52 -- common/autotest_common.sh@10 -- # set +x 00:31:31.597 [2024-06-07 23:27:53.837448] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:31.597 [2024-06-07 23:27:53.837465] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:31.597 [2024-06-07 23:27:53.837477] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:31.598 [2024-06-07 23:27:53.925765] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:31.598 [2024-06-07 23:27:54.194358] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:31.598 [2024-06-07 23:27:54.194389] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:31.598 23:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:31.598 23:27:54 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:31.598 23:27:54 -- common/autotest_common.sh@640 -- # local es=0 00:31:31.598 23:27:54 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:31.598 23:27:54 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:31:31.598 23:27:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:31.598 23:27:54 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:31:31.598 23:27:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:31.598 23:27:54 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:31.598 23:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:31.598 23:27:54 -- common/autotest_common.sh@10 -- # set +x 00:31:31.598 request: 00:31:31.598 { 00:31:31.598 "name": "nvme", 00:31:31.598 "trtype": "tcp", 00:31:31.598 "traddr": "10.0.0.2", 00:31:31.598 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:31.598 "adrfam": "ipv4", 00:31:31.598 "trsvcid": "8009", 00:31:31.598 "wait_for_attach": true, 00:31:31.598 "method": "bdev_nvme_start_discovery", 00:31:31.598 "req_id": 1 00:31:31.598 } 00:31:31.598 Got JSON-RPC error response 00:31:31.598 response: 00:31:31.598 { 00:31:31.598 "code": -17, 00:31:31.598 "message": "File exists" 00:31:31.598 } 00:31:31.598 23:27:54 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:31:31.598 23:27:54 -- common/autotest_common.sh@643 -- # es=1 00:31:31.598 23:27:54 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:31:31.598 23:27:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:31:31.598 23:27:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:31:31.598 23:27:54 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:31:31.598 23:27:54 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:31.598 23:27:54 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:31.598 23:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:31.598 23:27:54 -- host/discovery.sh@67 -- # sort 00:31:31.598 23:27:54 -- common/autotest_common.sh@10 -- # set +x 00:31:31.598 23:27:54 -- host/discovery.sh@67 -- # xargs 00:31:31.598 23:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:31.598 23:27:54 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:31:31.598 23:27:54 -- host/discovery.sh@147 -- # get_bdev_list 00:31:31.598 23:27:54 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:31.598 23:27:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:31.598 23:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:31.598 23:27:54 -- host/discovery.sh@55 -- # sort 00:31:31.598 23:27:54 -- common/autotest_common.sh@10 -- # set +x 00:31:31.598 23:27:54 -- host/discovery.sh@55 -- # xargs 00:31:31.859 23:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:31.859 23:27:54 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:31.859 23:27:54 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:31.859 23:27:54 -- common/autotest_common.sh@640 -- # local es=0 00:31:31.859 23:27:54 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:31.859 23:27:54 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:31:31.859 23:27:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:31.859 23:27:54 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:31:31.859 23:27:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:31.859 23:27:54 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:31.859 23:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:31.859 23:27:54 -- common/autotest_common.sh@10 -- # set +x 00:31:31.859 request: 00:31:31.859 { 00:31:31.859 "name": "nvme_second", 00:31:31.859 "trtype": "tcp", 00:31:31.859 "traddr": "10.0.0.2", 00:31:31.859 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:31.859 "adrfam": "ipv4", 00:31:31.859 "trsvcid": "8009", 00:31:31.859 "wait_for_attach": true, 00:31:31.859 "method": "bdev_nvme_start_discovery", 00:31:31.859 "req_id": 1 00:31:31.859 } 00:31:31.859 Got JSON-RPC error response 00:31:31.859 response: 00:31:31.859 { 00:31:31.859 "code": -17, 00:31:31.859 "message": "File exists" 00:31:31.859 } 00:31:31.859 23:27:54 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:31:31.859 23:27:54 -- common/autotest_common.sh@643 -- # es=1 00:31:31.859 23:27:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:31:31.859 23:27:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:31:31.859 23:27:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:31:31.859 
23:27:54 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:31:31.859 23:27:54 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:31.859 23:27:54 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:31.859 23:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:31.859 23:27:54 -- common/autotest_common.sh@10 -- # set +x 00:31:31.859 23:27:54 -- host/discovery.sh@67 -- # sort 00:31:31.859 23:27:54 -- host/discovery.sh@67 -- # xargs 00:31:31.859 23:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:31.859 23:27:54 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:31:31.859 23:27:54 -- host/discovery.sh@153 -- # get_bdev_list 00:31:31.859 23:27:54 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:31.859 23:27:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:31.859 23:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:31.859 23:27:54 -- common/autotest_common.sh@10 -- # set +x 00:31:31.859 23:27:54 -- host/discovery.sh@55 -- # sort 00:31:31.859 23:27:54 -- host/discovery.sh@55 -- # xargs 00:31:31.859 23:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:31.859 23:27:54 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:31.859 23:27:54 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:31.859 23:27:54 -- common/autotest_common.sh@640 -- # local es=0 00:31:31.859 23:27:54 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:31.859 23:27:54 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:31:31.859 23:27:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:31.859 23:27:54 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:31:31.859 23:27:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:31.859 23:27:54 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:31.859 23:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:31.859 23:27:54 -- common/autotest_common.sh@10 -- # set +x 00:31:32.799 [2024-06-07 23:27:55.441875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.799 [2024-06-07 23:27:55.442214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:32.799 [2024-06-07 23:27:55.442225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd8d90 with addr=10.0.0.2, port=8010 00:31:32.799 [2024-06-07 23:27:55.442236] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:32.799 [2024-06-07 23:27:55.442247] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:32.799 [2024-06-07 23:27:55.442254] bdev_nvme.c:6815:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:34.182 [2024-06-07 23:27:56.444215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.182 [2024-06-07 23:27:56.444570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.182 [2024-06-07 23:27:56.444607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection 
error of tqpair=0x1fd8d90 with addr=10.0.0.2, port=8010 00:31:34.182 [2024-06-07 23:27:56.444623] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:34.182 [2024-06-07 23:27:56.444631] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:34.182 [2024-06-07 23:27:56.444640] bdev_nvme.c:6815:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:35.125 [2024-06-07 23:27:57.446168] bdev_nvme.c:6796:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:35.125 request: 00:31:35.125 { 00:31:35.125 "name": "nvme_second", 00:31:35.125 "trtype": "tcp", 00:31:35.125 "traddr": "10.0.0.2", 00:31:35.125 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:35.125 "adrfam": "ipv4", 00:31:35.125 "trsvcid": "8010", 00:31:35.125 "attach_timeout_ms": 3000, 00:31:35.125 "method": "bdev_nvme_start_discovery", 00:31:35.125 "req_id": 1 00:31:35.125 } 00:31:35.125 Got JSON-RPC error response 00:31:35.125 response: 00:31:35.125 { 00:31:35.125 "code": -110, 00:31:35.125 "message": "Connection timed out" 00:31:35.125 } 00:31:35.125 23:27:57 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:31:35.125 23:27:57 -- common/autotest_common.sh@643 -- # es=1 00:31:35.125 23:27:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:31:35.125 23:27:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:31:35.125 23:27:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:31:35.125 23:27:57 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:31:35.125 23:27:57 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:35.125 23:27:57 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:35.125 23:27:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:35.125 23:27:57 -- host/discovery.sh@67 -- # sort 00:31:35.125 23:27:57 -- common/autotest_common.sh@10 -- # set +x 00:31:35.125 23:27:57 -- host/discovery.sh@67 -- # xargs 00:31:35.125 23:27:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:35.125 23:27:57 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:31:35.125 23:27:57 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:31:35.125 23:27:57 -- host/discovery.sh@162 -- # kill 3025293 00:31:35.125 23:27:57 -- host/discovery.sh@163 -- # nvmftestfini 00:31:35.125 23:27:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:35.125 23:27:57 -- nvmf/common.sh@116 -- # sync 00:31:35.125 23:27:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:35.125 23:27:57 -- nvmf/common.sh@119 -- # set +e 00:31:35.125 23:27:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:35.125 23:27:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:35.125 rmmod nvme_tcp 00:31:35.125 rmmod nvme_fabrics 00:31:35.125 rmmod nvme_keyring 00:31:35.125 23:27:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:35.125 23:27:57 -- nvmf/common.sh@123 -- # set -e 00:31:35.125 23:27:57 -- nvmf/common.sh@124 -- # return 0 00:31:35.125 23:27:57 -- nvmf/common.sh@477 -- # '[' -n 3025087 ']' 00:31:35.125 23:27:57 -- nvmf/common.sh@478 -- # killprocess 3025087 00:31:35.125 23:27:57 -- common/autotest_common.sh@926 -- # '[' -z 3025087 ']' 00:31:35.125 23:27:57 -- common/autotest_common.sh@930 -- # kill -0 3025087 00:31:35.125 23:27:57 -- common/autotest_common.sh@931 -- # uname 00:31:35.125 23:27:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:35.125 23:27:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3025087 
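The two negative checks above assert JSON-RPC failures rather than successes: re-issuing bdev_nvme_start_discovery under an existing name returns -17 ("File exists"), and pointing it at port 8010, where nothing listens, with -T 3000 fails with -110 ("Connection timed out") once the attach timeout expires. A minimal sketch of the same expect-failure pattern (a sketch only, reusing the host socket, address and hostnqn from this trace in place of the test's NOT helper):

    # this call must fail; treat success as a test error
    if rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
           -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000; then
        echo "discovery to a dead port unexpectedly succeeded" >&2
        exit 1
    fi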
00:31:35.125 23:27:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:35.125 23:27:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:35.125 23:27:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3025087' 00:31:35.125 killing process with pid 3025087 00:31:35.125 23:27:57 -- common/autotest_common.sh@945 -- # kill 3025087 00:31:35.125 23:27:57 -- common/autotest_common.sh@950 -- # wait 3025087 00:31:35.125 23:27:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:35.125 23:27:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:35.125 23:27:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:35.125 23:27:57 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:35.125 23:27:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:35.125 23:27:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:35.125 23:27:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:35.125 23:27:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:37.670 23:27:59 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:37.670 00:31:37.670 real 0m22.545s 00:31:37.670 user 0m28.691s 00:31:37.670 sys 0m6.636s 00:31:37.670 23:27:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:37.670 23:27:59 -- common/autotest_common.sh@10 -- # set +x 00:31:37.670 ************************************ 00:31:37.670 END TEST nvmf_discovery 00:31:37.670 ************************************ 00:31:37.670 23:27:59 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:37.670 23:27:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:37.670 23:27:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:37.670 23:27:59 -- common/autotest_common.sh@10 -- # set +x 00:31:37.670 ************************************ 00:31:37.670 START TEST nvmf_discovery_remove_ifc 00:31:37.670 ************************************ 00:31:37.670 23:27:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:37.670 * Looking for test storage... 
00:31:37.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:37.670 23:27:59 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:37.670 23:27:59 -- nvmf/common.sh@7 -- # uname -s 00:31:37.670 23:27:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:37.670 23:27:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:37.670 23:27:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:37.670 23:27:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:37.670 23:27:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:37.670 23:27:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:37.670 23:27:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:37.670 23:27:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:37.670 23:27:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:37.670 23:27:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:37.670 23:27:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:37.670 23:27:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:37.670 23:27:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:37.670 23:27:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:37.670 23:27:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:37.670 23:27:59 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:37.670 23:27:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:37.670 23:27:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:37.670 23:27:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:37.670 23:27:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.670 23:27:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.670 23:27:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.670 23:27:59 -- paths/export.sh@5 -- # export PATH 00:31:37.670 23:27:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.670 23:27:59 -- nvmf/common.sh@46 -- # : 0 00:31:37.670 23:27:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:37.670 23:27:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:37.670 23:27:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:37.670 23:27:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:37.670 23:27:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:37.670 23:27:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:37.670 23:27:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:37.670 23:27:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:37.670 23:27:59 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:37.670 23:27:59 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:37.670 23:27:59 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:37.670 23:27:59 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:37.670 23:27:59 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:37.670 23:27:59 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:31:37.670 23:27:59 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:37.670 23:27:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:37.670 23:27:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:37.670 23:27:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:37.670 23:27:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:37.670 23:27:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:37.670 23:27:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:37.670 23:27:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:37.670 23:27:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:37.670 23:27:59 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:37.670 23:27:59 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:37.670 23:27:59 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:37.670 23:27:59 -- common/autotest_common.sh@10 -- # set +x 00:31:44.259 23:28:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:44.259 23:28:06 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:44.259 23:28:06 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:44.259 23:28:06 
-- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:44.259 23:28:06 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:44.259 23:28:06 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:44.259 23:28:06 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:44.259 23:28:06 -- nvmf/common.sh@294 -- # net_devs=() 00:31:44.259 23:28:06 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:44.259 23:28:06 -- nvmf/common.sh@295 -- # e810=() 00:31:44.259 23:28:06 -- nvmf/common.sh@295 -- # local -ga e810 00:31:44.259 23:28:06 -- nvmf/common.sh@296 -- # x722=() 00:31:44.259 23:28:06 -- nvmf/common.sh@296 -- # local -ga x722 00:31:44.259 23:28:06 -- nvmf/common.sh@297 -- # mlx=() 00:31:44.259 23:28:06 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:44.259 23:28:06 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:44.259 23:28:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:44.259 23:28:06 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:44.259 23:28:06 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:44.259 23:28:06 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:44.259 23:28:06 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:44.259 23:28:06 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:44.259 23:28:06 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:44.259 23:28:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:44.259 23:28:06 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:44.259 23:28:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:44.259 23:28:06 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:44.259 23:28:06 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:44.259 23:28:06 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:44.259 23:28:06 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:44.259 23:28:06 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:44.259 23:28:06 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:44.259 23:28:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:44.259 23:28:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:44.259 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:44.259 23:28:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:44.259 23:28:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:44.259 23:28:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:44.259 23:28:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:44.259 23:28:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:44.259 23:28:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:44.259 23:28:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:44.259 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:44.259 23:28:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:44.259 23:28:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:44.259 23:28:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:44.259 23:28:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:44.259 23:28:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:44.259 23:28:06 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:44.259 23:28:06 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:44.259 23:28:06 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:44.259 23:28:06 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:44.259 23:28:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:44.259 23:28:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:44.259 23:28:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:44.259 23:28:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:44.259 Found net devices under 0000:31:00.0: cvl_0_0 00:31:44.259 23:28:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:44.259 23:28:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:44.259 23:28:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:44.259 23:28:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:44.259 23:28:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:44.259 23:28:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:44.259 Found net devices under 0000:31:00.1: cvl_0_1 00:31:44.259 23:28:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:44.259 23:28:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:44.259 23:28:06 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:44.259 23:28:06 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:44.259 23:28:06 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:44.259 23:28:06 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:44.259 23:28:06 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:44.259 23:28:06 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:44.259 23:28:06 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:44.259 23:28:06 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:44.259 23:28:06 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:44.259 23:28:06 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:44.259 23:28:06 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:44.259 23:28:06 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:44.259 23:28:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:44.259 23:28:06 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:44.259 23:28:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:44.259 23:28:06 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:44.259 23:28:06 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:44.521 23:28:06 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:44.521 23:28:06 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:44.521 23:28:06 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:44.521 23:28:06 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:44.521 23:28:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:44.521 23:28:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:44.521 23:28:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:44.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:44.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.690 ms 00:31:44.521 00:31:44.521 --- 10.0.0.2 ping statistics --- 00:31:44.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:44.521 rtt min/avg/max/mdev = 0.690/0.690/0.690/0.000 ms 00:31:44.521 23:28:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:44.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:44.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.436 ms 00:31:44.521 00:31:44.521 --- 10.0.0.1 ping statistics --- 00:31:44.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:44.521 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:31:44.521 23:28:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:44.521 23:28:07 -- nvmf/common.sh@410 -- # return 0 00:31:44.521 23:28:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:44.521 23:28:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:44.521 23:28:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:44.521 23:28:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:44.521 23:28:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:44.521 23:28:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:44.521 23:28:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:44.521 23:28:07 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:44.521 23:28:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:44.521 23:28:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:44.521 23:28:07 -- common/autotest_common.sh@10 -- # set +x 00:31:44.521 23:28:07 -- nvmf/common.sh@469 -- # nvmfpid=3031993 00:31:44.521 23:28:07 -- nvmf/common.sh@470 -- # waitforlisten 3031993 00:31:44.521 23:28:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:44.521 23:28:07 -- common/autotest_common.sh@819 -- # '[' -z 3031993 ']' 00:31:44.521 23:28:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:44.521 23:28:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:44.521 23:28:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:44.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:44.521 23:28:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:44.521 23:28:07 -- common/autotest_common.sh@10 -- # set +x 00:31:44.782 [2024-06-07 23:28:07.214624] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:44.782 [2024-06-07 23:28:07.214706] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:44.782 EAL: No free 2048 kB hugepages reported on node 1 00:31:44.782 [2024-06-07 23:28:07.304713] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:44.782 [2024-06-07 23:28:07.348384] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:44.782 [2024-06-07 23:28:07.348529] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
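The nvmf_tcp_init sequence traced above builds the two-port TCP loopback topology that the rest of this run depends on: the first e810 port is moved into a target-side network namespace, both sides get a 10.0.0.x address, and reachability is verified in both directions before the target starts. A minimal sketch of the equivalent manual setup, assuming the same cvl_0_0/cvl_0_1 port names and addresses reported in the trace:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the first port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator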
00:31:44.782 [2024-06-07 23:28:07.348539] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:44.782 [2024-06-07 23:28:07.348553] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:44.782 [2024-06-07 23:28:07.348576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:45.354 23:28:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:45.354 23:28:07 -- common/autotest_common.sh@852 -- # return 0 00:31:45.354 23:28:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:45.354 23:28:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:45.354 23:28:07 -- common/autotest_common.sh@10 -- # set +x 00:31:45.354 23:28:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:45.354 23:28:08 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:45.354 23:28:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:45.354 23:28:08 -- common/autotest_common.sh@10 -- # set +x 00:31:45.615 [2024-06-07 23:28:08.037699] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:45.615 [2024-06-07 23:28:08.045972] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:45.615 null0 00:31:45.615 [2024-06-07 23:28:08.077895] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:45.615 23:28:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:45.615 23:28:08 -- host/discovery_remove_ifc.sh@59 -- # hostpid=3032075 00:31:45.615 23:28:08 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3032075 /tmp/host.sock 00:31:45.615 23:28:08 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:45.615 23:28:08 -- common/autotest_common.sh@819 -- # '[' -z 3032075 ']' 00:31:45.615 23:28:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:31:45.615 23:28:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:45.615 23:28:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:45.615 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:45.615 23:28:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:45.615 23:28:08 -- common/autotest_common.sh@10 -- # set +x 00:31:45.615 [2024-06-07 23:28:08.149319] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:31:45.615 [2024-06-07 23:28:08.149385] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3032075 ] 00:31:45.615 EAL: No free 2048 kB hugepages reported on node 1 00:31:45.615 [2024-06-07 23:28:08.216518] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:45.615 [2024-06-07 23:28:08.252955] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:45.615 [2024-06-07 23:28:08.253102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:46.558 23:28:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:46.558 23:28:08 -- common/autotest_common.sh@852 -- # return 0 00:31:46.558 23:28:08 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:46.558 23:28:08 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:46.558 23:28:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:46.558 23:28:08 -- common/autotest_common.sh@10 -- # set +x 00:31:46.558 23:28:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:46.558 23:28:08 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:46.558 23:28:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:46.558 23:28:08 -- common/autotest_common.sh@10 -- # set +x 00:31:46.558 23:28:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:46.558 23:28:08 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:46.558 23:28:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:46.558 23:28:08 -- common/autotest_common.sh@10 -- # set +x 00:31:47.501 [2024-06-07 23:28:10.038478] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:47.501 [2024-06-07 23:28:10.038508] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:47.501 [2024-06-07 23:28:10.038527] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:47.501 [2024-06-07 23:28:10.126792] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:47.761 [2024-06-07 23:28:10.309697] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:47.761 [2024-06-07 23:28:10.309750] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:47.761 [2024-06-07 23:28:10.309773] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:47.761 [2024-06-07 23:28:10.309787] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:47.761 [2024-06-07 23:28:10.309811] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:47.761 23:28:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:47.761 23:28:10 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:47.761 23:28:10 -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:31:47.761 [2024-06-07 23:28:10.315574] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1bc1870 was disconnected and freed. delete nvme_qpair. 00:31:47.761 23:28:10 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:47.761 23:28:10 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:47.761 23:28:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:47.761 23:28:10 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:47.761 23:28:10 -- common/autotest_common.sh@10 -- # set +x 00:31:47.761 23:28:10 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:47.761 23:28:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:47.761 23:28:10 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:47.761 23:28:10 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:31:47.761 23:28:10 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:31:48.021 23:28:10 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:31:48.021 23:28:10 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:48.021 23:28:10 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:48.021 23:28:10 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:48.021 23:28:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:48.021 23:28:10 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:48.021 23:28:10 -- common/autotest_common.sh@10 -- # set +x 00:31:48.021 23:28:10 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:48.021 23:28:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:48.021 23:28:10 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:48.021 23:28:10 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:48.960 23:28:11 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:48.960 23:28:11 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:48.960 23:28:11 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:48.960 23:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:48.960 23:28:11 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:48.960 23:28:11 -- common/autotest_common.sh@10 -- # set +x 00:31:48.960 23:28:11 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:48.960 23:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:48.960 23:28:11 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:48.960 23:28:11 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:50.343 23:28:12 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:50.343 23:28:12 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:50.343 23:28:12 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:50.343 23:28:12 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:50.343 23:28:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:50.343 23:28:12 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:50.343 23:28:12 -- common/autotest_common.sh@10 -- # set +x 00:31:50.343 23:28:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:50.343 23:28:12 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:50.343 23:28:12 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:51.281 23:28:13 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:51.281 23:28:13 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:31:51.281 23:28:13 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:51.281 23:28:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:51.281 23:28:13 -- common/autotest_common.sh@10 -- # set +x 00:31:51.281 23:28:13 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:51.281 23:28:13 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:51.281 23:28:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:51.281 23:28:13 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:51.281 23:28:13 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:52.284 23:28:14 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:52.284 23:28:14 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:52.284 23:28:14 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:52.284 23:28:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:52.284 23:28:14 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:52.284 23:28:14 -- common/autotest_common.sh@10 -- # set +x 00:31:52.284 23:28:14 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:52.284 23:28:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:52.284 23:28:14 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:52.284 23:28:14 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:53.226 [2024-06-07 23:28:15.750091] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:31:53.226 [2024-06-07 23:28:15.750132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:53.226 [2024-06-07 23:28:15.750144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:53.226 [2024-06-07 23:28:15.750153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:53.226 [2024-06-07 23:28:15.750160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:53.226 [2024-06-07 23:28:15.750169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:53.226 [2024-06-07 23:28:15.750176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:53.226 [2024-06-07 23:28:15.750184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:53.226 [2024-06-07 23:28:15.750191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:53.226 [2024-06-07 23:28:15.750200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:53.226 [2024-06-07 23:28:15.750207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:53.226 [2024-06-07 23:28:15.750214] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b87cd0 is same with the state(5) to be set 00:31:53.226 [2024-06-07 23:28:15.760112] 
nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b87cd0 (9): Bad file descriptor 00:31:53.226 [2024-06-07 23:28:15.770155] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:53.226 23:28:15 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:53.226 23:28:15 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:53.226 23:28:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:53.226 23:28:15 -- common/autotest_common.sh@10 -- # set +x 00:31:53.226 23:28:15 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:53.227 23:28:15 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:53.227 23:28:15 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:54.168 [2024-06-07 23:28:16.817270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:55.554 [2024-06-07 23:28:17.841312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:55.554 [2024-06-07 23:28:17.841351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b87cd0 with addr=10.0.0.2, port=4420 00:31:55.554 [2024-06-07 23:28:17.841366] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b87cd0 is same with the state(5) to be set 00:31:55.554 [2024-06-07 23:28:17.841724] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b87cd0 (9): Bad file descriptor 00:31:55.554 [2024-06-07 23:28:17.841746] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:55.554 [2024-06-07 23:28:17.841767] bdev_nvme.c:6504:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:31:55.554 [2024-06-07 23:28:17.841790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:55.554 [2024-06-07 23:28:17.841800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.554 [2024-06-07 23:28:17.841811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:55.554 [2024-06-07 23:28:17.841819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.554 [2024-06-07 23:28:17.841826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:55.554 [2024-06-07 23:28:17.841834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.554 [2024-06-07 23:28:17.841841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:55.554 [2024-06-07 23:28:17.841848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.554 [2024-06-07 23:28:17.841856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:55.554 [2024-06-07 23:28:17.841863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:31:55.554 [2024-06-07 23:28:17.841870] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:31:55.554 [2024-06-07 23:28:17.842393] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b880e0 (9): Bad file descriptor 00:31:55.554 [2024-06-07 23:28:17.843405] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:31:55.554 [2024-06-07 23:28:17.843415] nvme_ctrlr.c:1135:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:31:55.554 23:28:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:55.554 23:28:17 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:55.554 23:28:17 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:56.497 23:28:18 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:56.497 23:28:18 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:56.497 23:28:18 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:56.497 23:28:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:56.497 23:28:18 -- common/autotest_common.sh@10 -- # set +x 00:31:56.497 23:28:18 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:56.497 23:28:18 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:56.497 23:28:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:56.497 23:28:18 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:31:56.497 23:28:18 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:56.497 23:28:18 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:56.497 23:28:19 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:31:56.497 23:28:19 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:56.497 23:28:19 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:56.497 23:28:19 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:56.497 23:28:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:56.497 23:28:19 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:56.497 23:28:19 -- common/autotest_common.sh@10 -- # set +x 00:31:56.497 23:28:19 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:56.497 23:28:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:56.497 23:28:19 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:56.497 23:28:19 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:57.439 [2024-06-07 23:28:19.902419] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:57.439 [2024-06-07 23:28:19.902439] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:57.439 [2024-06-07 23:28:19.902453] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:57.439 [2024-06-07 23:28:20.029845] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:31:57.439 23:28:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:57.439 23:28:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:57.439 23:28:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:57.439 23:28:20 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:31:57.439 23:28:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:57.439 23:28:20 -- common/autotest_common.sh@10 -- # set +x 00:31:57.439 23:28:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:57.439 23:28:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:57.439 23:28:20 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:57.439 23:28:20 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:57.699 [2024-06-07 23:28:20.132963] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:57.699 [2024-06-07 23:28:20.133008] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:57.699 [2024-06-07 23:28:20.133029] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:57.699 [2024-06-07 23:28:20.133043] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:31:57.699 [2024-06-07 23:28:20.133051] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:57.699 [2024-06-07 23:28:20.139996] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1bcc110 was disconnected and freed. delete nvme_qpair. 00:31:58.643 23:28:21 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:58.643 23:28:21 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:58.643 23:28:21 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:58.643 23:28:21 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:58.643 23:28:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:58.643 23:28:21 -- common/autotest_common.sh@10 -- # set +x 00:31:58.643 23:28:21 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:58.643 23:28:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:58.643 23:28:21 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:31:58.643 23:28:21 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:31:58.643 23:28:21 -- host/discovery_remove_ifc.sh@90 -- # killprocess 3032075 00:31:58.643 23:28:21 -- common/autotest_common.sh@926 -- # '[' -z 3032075 ']' 00:31:58.643 23:28:21 -- common/autotest_common.sh@930 -- # kill -0 3032075 00:31:58.643 23:28:21 -- common/autotest_common.sh@931 -- # uname 00:31:58.643 23:28:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:58.643 23:28:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3032075 00:31:58.643 23:28:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:58.643 23:28:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:58.643 23:28:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3032075' 00:31:58.643 killing process with pid 3032075 00:31:58.643 23:28:21 -- common/autotest_common.sh@945 -- # kill 3032075 00:31:58.643 23:28:21 -- common/autotest_common.sh@950 -- # wait 3032075 00:31:58.904 23:28:21 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:31:58.904 23:28:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:58.904 23:28:21 -- nvmf/common.sh@116 -- # sync 00:31:58.904 23:28:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:58.904 23:28:21 -- nvmf/common.sh@119 -- # set +e 00:31:58.904 23:28:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:58.904 23:28:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:58.904 rmmod nvme_tcp 00:31:58.904 rmmod nvme_fabrics 00:31:58.904 rmmod nvme_keyring 
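The discovery_remove_ifc flow that just completed reduces to: attach through the discovery service, pull the target-side interface out from under the connection, wait for the bdev to disappear, then restore the interface and wait for the rediscovered namespace to reappear. A condensed sketch of the host-side calls, assuming the rpc_cmd helper in the trace maps to the same rpc.py invocations and using the sockets, NQNs and interface names from this run:

  RPC="scripts/rpc.py -s /tmp/host.sock"
  $RPC bdev_nvme_set_options -e 1
  $RPC framework_start_init
  $RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach
  $RPC bdev_get_bdevs | jq -r '.[].name'              # nvme0n1 is present
  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
  # poll bdev_get_bdevs until the list is empty (controller loss timeout is 2s)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  # poll bdev_get_bdevs until the re-attached namespace shows up as nvme1n1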
00:31:58.904 23:28:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:58.904 23:28:21 -- nvmf/common.sh@123 -- # set -e 00:31:58.904 23:28:21 -- nvmf/common.sh@124 -- # return 0 00:31:58.904 23:28:21 -- nvmf/common.sh@477 -- # '[' -n 3031993 ']' 00:31:58.904 23:28:21 -- nvmf/common.sh@478 -- # killprocess 3031993 00:31:58.904 23:28:21 -- common/autotest_common.sh@926 -- # '[' -z 3031993 ']' 00:31:58.904 23:28:21 -- common/autotest_common.sh@930 -- # kill -0 3031993 00:31:58.904 23:28:21 -- common/autotest_common.sh@931 -- # uname 00:31:58.904 23:28:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:58.904 23:28:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3031993 00:31:58.904 23:28:21 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:58.904 23:28:21 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:58.904 23:28:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3031993' 00:31:58.904 killing process with pid 3031993 00:31:58.904 23:28:21 -- common/autotest_common.sh@945 -- # kill 3031993 00:31:58.904 23:28:21 -- common/autotest_common.sh@950 -- # wait 3031993 00:31:58.904 23:28:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:58.904 23:28:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:58.904 23:28:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:58.904 23:28:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:58.904 23:28:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:58.904 23:28:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:58.904 23:28:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:58.904 23:28:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:01.450 23:28:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:01.450 00:32:01.450 real 0m23.767s 00:32:01.450 user 0m28.024s 00:32:01.450 sys 0m6.509s 00:32:01.450 23:28:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:01.450 23:28:23 -- common/autotest_common.sh@10 -- # set +x 00:32:01.450 ************************************ 00:32:01.450 END TEST nvmf_discovery_remove_ifc 00:32:01.450 ************************************ 00:32:01.450 23:28:23 -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:32:01.450 23:28:23 -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:01.450 23:28:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:01.450 23:28:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:01.450 23:28:23 -- common/autotest_common.sh@10 -- # set +x 00:32:01.450 ************************************ 00:32:01.450 START TEST nvmf_digest 00:32:01.450 ************************************ 00:32:01.450 23:28:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:01.450 * Looking for test storage... 
00:32:01.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:01.450 23:28:23 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:01.450 23:28:23 -- nvmf/common.sh@7 -- # uname -s 00:32:01.450 23:28:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:01.450 23:28:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:01.450 23:28:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:01.450 23:28:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:01.450 23:28:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:01.450 23:28:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:01.450 23:28:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:01.450 23:28:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:01.450 23:28:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:01.450 23:28:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:01.450 23:28:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:01.450 23:28:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:01.450 23:28:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:01.450 23:28:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:01.450 23:28:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:01.450 23:28:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:01.450 23:28:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:01.450 23:28:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:01.450 23:28:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:01.450 23:28:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.450 23:28:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.450 23:28:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.450 23:28:23 -- paths/export.sh@5 -- # export PATH 00:32:01.450 23:28:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.450 23:28:23 -- nvmf/common.sh@46 -- # : 0 00:32:01.450 23:28:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:01.450 23:28:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:01.450 23:28:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:01.450 23:28:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:01.450 23:28:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:01.450 23:28:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:01.450 23:28:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:01.450 23:28:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:01.450 23:28:23 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:01.450 23:28:23 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:32:01.450 23:28:23 -- host/digest.sh@16 -- # runtime=2 00:32:01.450 23:28:23 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:32:01.450 23:28:23 -- host/digest.sh@132 -- # nvmftestinit 00:32:01.450 23:28:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:01.450 23:28:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:01.450 23:28:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:01.450 23:28:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:01.450 23:28:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:01.450 23:28:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:01.450 23:28:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:01.450 23:28:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:01.450 23:28:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:01.450 23:28:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:01.450 23:28:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:01.450 23:28:23 -- common/autotest_common.sh@10 -- # set +x 00:32:08.035 23:28:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:08.035 23:28:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:08.035 23:28:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:08.035 23:28:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:08.035 23:28:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:08.035 23:28:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:08.035 23:28:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:08.035 23:28:30 -- 
nvmf/common.sh@294 -- # net_devs=() 00:32:08.035 23:28:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:08.035 23:28:30 -- nvmf/common.sh@295 -- # e810=() 00:32:08.035 23:28:30 -- nvmf/common.sh@295 -- # local -ga e810 00:32:08.035 23:28:30 -- nvmf/common.sh@296 -- # x722=() 00:32:08.035 23:28:30 -- nvmf/common.sh@296 -- # local -ga x722 00:32:08.035 23:28:30 -- nvmf/common.sh@297 -- # mlx=() 00:32:08.035 23:28:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:08.035 23:28:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:08.035 23:28:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:08.035 23:28:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:08.035 23:28:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:08.035 23:28:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:08.035 23:28:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:08.035 23:28:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:08.035 23:28:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:08.035 23:28:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:08.035 23:28:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:08.035 23:28:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:08.035 23:28:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:08.035 23:28:30 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:08.035 23:28:30 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:08.035 23:28:30 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:08.035 23:28:30 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:08.035 23:28:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:08.035 23:28:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:08.035 23:28:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:08.035 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:08.035 23:28:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:08.035 23:28:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:08.035 23:28:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:08.035 23:28:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:08.035 23:28:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:08.035 23:28:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:08.035 23:28:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:08.035 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:08.035 23:28:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:08.035 23:28:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:08.035 23:28:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:08.035 23:28:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:08.035 23:28:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:08.035 23:28:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:08.035 23:28:30 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:08.035 23:28:30 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:08.035 23:28:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:08.035 23:28:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:08.035 23:28:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:08.035 23:28:30 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:08.035 23:28:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:08.035 Found net devices under 0000:31:00.0: cvl_0_0 00:32:08.035 23:28:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:08.035 23:28:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:08.035 23:28:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:08.035 23:28:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:08.035 23:28:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:08.035 23:28:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:08.035 Found net devices under 0000:31:00.1: cvl_0_1 00:32:08.035 23:28:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:08.035 23:28:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:08.035 23:28:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:08.035 23:28:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:08.035 23:28:30 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:08.035 23:28:30 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:08.035 23:28:30 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:08.035 23:28:30 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:08.035 23:28:30 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:08.035 23:28:30 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:08.035 23:28:30 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:08.035 23:28:30 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:08.035 23:28:30 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:08.035 23:28:30 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:08.035 23:28:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:08.035 23:28:30 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:08.035 23:28:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:08.035 23:28:30 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:08.035 23:28:30 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:08.296 23:28:30 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:08.296 23:28:30 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:08.297 23:28:30 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:08.297 23:28:30 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:08.297 23:28:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:08.297 23:28:30 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:08.297 23:28:30 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:08.297 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:08.297 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:32:08.297 00:32:08.297 --- 10.0.0.2 ping statistics --- 00:32:08.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:08.297 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:32:08.297 23:28:30 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:08.297 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:08.297 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:32:08.297 00:32:08.297 --- 10.0.0.1 ping statistics --- 00:32:08.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:08.297 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:32:08.297 23:28:30 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:08.297 23:28:30 -- nvmf/common.sh@410 -- # return 0 00:32:08.297 23:28:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:08.297 23:28:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:08.297 23:28:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:08.297 23:28:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:08.297 23:28:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:08.297 23:28:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:08.297 23:28:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:08.297 23:28:30 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:08.297 23:28:30 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:32:08.297 23:28:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:08.297 23:28:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:08.297 23:28:30 -- common/autotest_common.sh@10 -- # set +x 00:32:08.297 ************************************ 00:32:08.297 START TEST nvmf_digest_clean 00:32:08.297 ************************************ 00:32:08.297 23:28:30 -- common/autotest_common.sh@1104 -- # run_digest 00:32:08.297 23:28:30 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:32:08.297 23:28:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:08.297 23:28:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:08.297 23:28:30 -- common/autotest_common.sh@10 -- # set +x 00:32:08.557 23:28:30 -- nvmf/common.sh@469 -- # nvmfpid=3038931 00:32:08.557 23:28:30 -- nvmf/common.sh@470 -- # waitforlisten 3038931 00:32:08.557 23:28:30 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:08.557 23:28:30 -- common/autotest_common.sh@819 -- # '[' -z 3038931 ']' 00:32:08.557 23:28:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:08.557 23:28:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:08.557 23:28:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:08.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:08.557 23:28:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:08.557 23:28:30 -- common/autotest_common.sh@10 -- # set +x 00:32:08.558 [2024-06-07 23:28:31.036205] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:32:08.558 [2024-06-07 23:28:31.036266] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:08.558 EAL: No free 2048 kB hugepages reported on node 1 00:32:08.558 [2024-06-07 23:28:31.106015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:08.558 [2024-06-07 23:28:31.142385] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:08.558 [2024-06-07 23:28:31.142512] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:08.558 [2024-06-07 23:28:31.142520] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:08.558 [2024-06-07 23:28:31.142528] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:08.558 [2024-06-07 23:28:31.142546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:09.130 23:28:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:09.130 23:28:31 -- common/autotest_common.sh@852 -- # return 0 00:32:09.130 23:28:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:09.131 23:28:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:09.131 23:28:31 -- common/autotest_common.sh@10 -- # set +x 00:32:09.392 23:28:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:09.392 23:28:31 -- host/digest.sh@120 -- # common_target_config 00:32:09.392 23:28:31 -- host/digest.sh@43 -- # rpc_cmd 00:32:09.392 23:28:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:09.392 23:28:31 -- common/autotest_common.sh@10 -- # set +x 00:32:09.392 null0 00:32:09.392 [2024-06-07 23:28:31.895240] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:09.392 [2024-06-07 23:28:31.919430] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:09.392 23:28:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:09.392 23:28:31 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:32:09.392 23:28:31 -- host/digest.sh@77 -- # local rw bs qd 00:32:09.392 23:28:31 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:09.392 23:28:31 -- host/digest.sh@80 -- # rw=randread 00:32:09.392 23:28:31 -- host/digest.sh@80 -- # bs=4096 00:32:09.392 23:28:31 -- host/digest.sh@80 -- # qd=128 00:32:09.392 23:28:31 -- host/digest.sh@82 -- # bperfpid=3039127 00:32:09.392 23:28:31 -- host/digest.sh@83 -- # waitforlisten 3039127 /var/tmp/bperf.sock 00:32:09.392 23:28:31 -- common/autotest_common.sh@819 -- # '[' -z 3039127 ']' 00:32:09.392 23:28:31 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:09.392 23:28:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:09.392 23:28:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:09.392 23:28:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:09.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
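
[Editor's note] For readers reconstructing the setup rather than reading the interleaved trace: the target-side bring-up that everything below depends on amounts to the sequence sketched here. The ip/iptables commands are the ones echoed above; the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses are specific to this test bed, and the rpc.py calls that create the null0 bdev and the TCP listener are assumptions (host/digest.sh issues them through rpc_cmd without echoing the method names), with illustrative bdev sizes.

    # move the target-side port into its own namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # start the target paused inside the namespace, then configure it over RPC
    # (method names below are assumed, not shown verbatim in the trace)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py bdev_null_create null0 100 4096        # illustrative size/block
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
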
00:32:09.392 23:28:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:09.392 23:28:31 -- common/autotest_common.sh@10 -- # set +x 00:32:09.392 [2024-06-07 23:28:31.981186] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:32:09.392 [2024-06-07 23:28:31.981251] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3039127 ] 00:32:09.392 EAL: No free 2048 kB hugepages reported on node 1 00:32:09.392 [2024-06-07 23:28:32.058545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:09.654 [2024-06-07 23:28:32.087772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:10.225 23:28:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:10.225 23:28:32 -- common/autotest_common.sh@852 -- # return 0 00:32:10.225 23:28:32 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:32:10.225 23:28:32 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:32:10.225 23:28:32 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:10.492 23:28:32 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:10.492 23:28:32 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:10.755 nvme0n1 00:32:10.755 23:28:33 -- host/digest.sh@91 -- # bperf_py perform_tests 00:32:10.755 23:28:33 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:10.755 Running I/O for 2 seconds... 
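
[Editor's note] Each digest-clean run below repeats the same host-side pattern; the commands are echoed verbatim in the trace, so this sketch only lines them up in order and drops the jenkins workspace prefix. The four runs differ only in the -w/-o/-q combination (randread or randwrite, 4096 or 131072 bytes, queue depth 128 or 16).

    # bdevperf on core 1, idle until RPC init so the controller can be attached
    # with the NVMe/TCP data digest (--ddgst) enabled first
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # drive I/O for the configured 2 seconds and print the latency summary
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
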
00:32:13.296 00:32:13.296 Latency(us) 00:32:13.296 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:13.296 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:13.296 nvme0n1 : 2.00 18856.86 73.66 0.00 0.00 6780.81 2443.95 17367.04 00:32:13.296 =================================================================================================================== 00:32:13.296 Total : 18856.86 73.66 0.00 0.00 6780.81 2443.95 17367.04 00:32:13.296 0 00:32:13.296 23:28:35 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:32:13.296 23:28:35 -- host/digest.sh@92 -- # get_accel_stats 00:32:13.296 23:28:35 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:13.296 23:28:35 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:13.296 | select(.opcode=="crc32c") 00:32:13.296 | "\(.module_name) \(.executed)"' 00:32:13.296 23:28:35 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:13.296 23:28:35 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:32:13.296 23:28:35 -- host/digest.sh@93 -- # exp_module=software 00:32:13.296 23:28:35 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:32:13.296 23:28:35 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:13.296 23:28:35 -- host/digest.sh@97 -- # killprocess 3039127 00:32:13.296 23:28:35 -- common/autotest_common.sh@926 -- # '[' -z 3039127 ']' 00:32:13.296 23:28:35 -- common/autotest_common.sh@930 -- # kill -0 3039127 00:32:13.296 23:28:35 -- common/autotest_common.sh@931 -- # uname 00:32:13.296 23:28:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:13.296 23:28:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3039127 00:32:13.296 23:28:35 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:13.296 23:28:35 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:13.296 23:28:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3039127' 00:32:13.296 killing process with pid 3039127 00:32:13.296 23:28:35 -- common/autotest_common.sh@945 -- # kill 3039127 00:32:13.296 Received shutdown signal, test time was about 2.000000 seconds 00:32:13.296 00:32:13.296 Latency(us) 00:32:13.296 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:13.296 =================================================================================================================== 00:32:13.296 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:13.296 23:28:35 -- common/autotest_common.sh@950 -- # wait 3039127 00:32:13.296 23:28:35 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:32:13.296 23:28:35 -- host/digest.sh@77 -- # local rw bs qd 00:32:13.296 23:28:35 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:13.296 23:28:35 -- host/digest.sh@80 -- # rw=randread 00:32:13.296 23:28:35 -- host/digest.sh@80 -- # bs=131072 00:32:13.296 23:28:35 -- host/digest.sh@80 -- # qd=16 00:32:13.296 23:28:35 -- host/digest.sh@82 -- # bperfpid=3039896 00:32:13.296 23:28:35 -- host/digest.sh@83 -- # waitforlisten 3039896 /var/tmp/bperf.sock 00:32:13.296 23:28:35 -- common/autotest_common.sh@819 -- # '[' -z 3039896 ']' 00:32:13.296 23:28:35 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:13.296 23:28:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 
00:32:13.296 23:28:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:13.296 23:28:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:13.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:13.296 23:28:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:13.296 23:28:35 -- common/autotest_common.sh@10 -- # set +x 00:32:13.296 [2024-06-07 23:28:35.731585] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:32:13.296 [2024-06-07 23:28:35.731657] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3039896 ] 00:32:13.296 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:13.296 Zero copy mechanism will not be used. 00:32:13.296 EAL: No free 2048 kB hugepages reported on node 1 00:32:13.297 [2024-06-07 23:28:35.811346] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.297 [2024-06-07 23:28:35.840263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:13.868 23:28:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:13.868 23:28:36 -- common/autotest_common.sh@852 -- # return 0 00:32:13.868 23:28:36 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:32:13.868 23:28:36 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:32:13.868 23:28:36 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:14.129 23:28:36 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:14.129 23:28:36 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:14.390 nvme0n1 00:32:14.391 23:28:37 -- host/digest.sh@91 -- # bperf_py perform_tests 00:32:14.391 23:28:37 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:14.651 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:14.651 Zero copy mechanism will not be used. 00:32:14.651 Running I/O for 2 seconds... 
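
[Editor's note] After each run the test decides which crc32c implementation actually computed the digests by querying the bperf instance's accel statistics. The jq filter is the one echoed in the trace; the surrounding plumbing here is a sketch of how its output is consumed. The expected module is 'software' because no accel offload module is selected for these runs.

    # keep only crc32c operations and report "module executed-count"
    read -r acc_module acc_executed < <(
        ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
          | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    (( acc_executed > 0 ))                     # digests were actually computed
    [[ $acc_module == software ]]              # and by the expected module
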
00:32:16.566 00:32:16.566 Latency(us) 00:32:16.566 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:16.566 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:16.566 nvme0n1 : 2.01 2977.93 372.24 0.00 0.00 5370.00 1174.19 12124.16 00:32:16.566 =================================================================================================================== 00:32:16.566 Total : 2977.93 372.24 0.00 0.00 5370.00 1174.19 12124.16 00:32:16.566 0 00:32:16.566 23:28:39 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:32:16.566 23:28:39 -- host/digest.sh@92 -- # get_accel_stats 00:32:16.566 23:28:39 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:16.566 23:28:39 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:16.566 | select(.opcode=="crc32c") 00:32:16.566 | "\(.module_name) \(.executed)"' 00:32:16.566 23:28:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:16.828 23:28:39 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:32:16.828 23:28:39 -- host/digest.sh@93 -- # exp_module=software 00:32:16.828 23:28:39 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:32:16.828 23:28:39 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:16.828 23:28:39 -- host/digest.sh@97 -- # killprocess 3039896 00:32:16.828 23:28:39 -- common/autotest_common.sh@926 -- # '[' -z 3039896 ']' 00:32:16.828 23:28:39 -- common/autotest_common.sh@930 -- # kill -0 3039896 00:32:16.828 23:28:39 -- common/autotest_common.sh@931 -- # uname 00:32:16.828 23:28:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:16.828 23:28:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3039896 00:32:16.828 23:28:39 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:16.828 23:28:39 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:16.828 23:28:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3039896' 00:32:16.828 killing process with pid 3039896 00:32:16.828 23:28:39 -- common/autotest_common.sh@945 -- # kill 3039896 00:32:16.828 Received shutdown signal, test time was about 2.000000 seconds 00:32:16.828 00:32:16.828 Latency(us) 00:32:16.828 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:16.828 =================================================================================================================== 00:32:16.828 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:16.828 23:28:39 -- common/autotest_common.sh@950 -- # wait 3039896 00:32:16.828 23:28:39 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:32:16.828 23:28:39 -- host/digest.sh@77 -- # local rw bs qd 00:32:16.828 23:28:39 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:16.828 23:28:39 -- host/digest.sh@80 -- # rw=randwrite 00:32:16.828 23:28:39 -- host/digest.sh@80 -- # bs=4096 00:32:16.828 23:28:39 -- host/digest.sh@80 -- # qd=128 00:32:16.828 23:28:39 -- host/digest.sh@82 -- # bperfpid=3040662 00:32:16.828 23:28:39 -- host/digest.sh@83 -- # waitforlisten 3040662 /var/tmp/bperf.sock 00:32:16.828 23:28:39 -- common/autotest_common.sh@819 -- # '[' -z 3040662 ']' 00:32:16.828 23:28:39 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:16.828 23:28:39 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:32:16.828 23:28:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:16.828 23:28:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:16.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:16.828 23:28:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:16.828 23:28:39 -- common/autotest_common.sh@10 -- # set +x 00:32:16.828 [2024-06-07 23:28:39.485704] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:32:16.828 [2024-06-07 23:28:39.485759] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3040662 ] 00:32:17.088 EAL: No free 2048 kB hugepages reported on node 1 00:32:17.088 [2024-06-07 23:28:39.560624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:17.088 [2024-06-07 23:28:39.587240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:17.658 23:28:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:17.658 23:28:40 -- common/autotest_common.sh@852 -- # return 0 00:32:17.658 23:28:40 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:32:17.658 23:28:40 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:32:17.658 23:28:40 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:17.918 23:28:40 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:17.918 23:28:40 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:18.179 nvme0n1 00:32:18.179 23:28:40 -- host/digest.sh@91 -- # bperf_py perform_tests 00:32:18.179 23:28:40 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:18.440 Running I/O for 2 seconds... 
00:32:20.355 00:32:20.355 Latency(us) 00:32:20.355 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:20.355 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:20.355 nvme0n1 : 2.00 22683.08 88.61 0.00 0.00 5638.37 2498.56 17039.36 00:32:20.355 =================================================================================================================== 00:32:20.355 Total : 22683.08 88.61 0.00 0.00 5638.37 2498.56 17039.36 00:32:20.355 0 00:32:20.355 23:28:42 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:32:20.355 23:28:42 -- host/digest.sh@92 -- # get_accel_stats 00:32:20.355 23:28:42 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:20.355 23:28:42 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:20.355 23:28:42 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:20.355 | select(.opcode=="crc32c") 00:32:20.355 | "\(.module_name) \(.executed)"' 00:32:20.615 23:28:43 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:32:20.615 23:28:43 -- host/digest.sh@93 -- # exp_module=software 00:32:20.615 23:28:43 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:32:20.615 23:28:43 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:20.615 23:28:43 -- host/digest.sh@97 -- # killprocess 3040662 00:32:20.615 23:28:43 -- common/autotest_common.sh@926 -- # '[' -z 3040662 ']' 00:32:20.615 23:28:43 -- common/autotest_common.sh@930 -- # kill -0 3040662 00:32:20.615 23:28:43 -- common/autotest_common.sh@931 -- # uname 00:32:20.615 23:28:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:20.615 23:28:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3040662 00:32:20.615 23:28:43 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:20.615 23:28:43 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:20.615 23:28:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3040662' 00:32:20.615 killing process with pid 3040662 00:32:20.615 23:28:43 -- common/autotest_common.sh@945 -- # kill 3040662 00:32:20.615 Received shutdown signal, test time was about 2.000000 seconds 00:32:20.615 00:32:20.615 Latency(us) 00:32:20.615 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:20.615 =================================================================================================================== 00:32:20.615 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:20.615 23:28:43 -- common/autotest_common.sh@950 -- # wait 3040662 00:32:20.615 23:28:43 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:32:20.615 23:28:43 -- host/digest.sh@77 -- # local rw bs qd 00:32:20.615 23:28:43 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:20.615 23:28:43 -- host/digest.sh@80 -- # rw=randwrite 00:32:20.615 23:28:43 -- host/digest.sh@80 -- # bs=131072 00:32:20.615 23:28:43 -- host/digest.sh@80 -- # qd=16 00:32:20.615 23:28:43 -- host/digest.sh@82 -- # bperfpid=3041356 00:32:20.615 23:28:43 -- host/digest.sh@83 -- # waitforlisten 3041356 /var/tmp/bperf.sock 00:32:20.615 23:28:43 -- common/autotest_common.sh@819 -- # '[' -z 3041356 ']' 00:32:20.615 23:28:43 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:20.615 23:28:43 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:32:20.615 23:28:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:20.615 23:28:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:20.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:20.615 23:28:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:20.615 23:28:43 -- common/autotest_common.sh@10 -- # set +x 00:32:20.875 [2024-06-07 23:28:43.305697] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:32:20.875 [2024-06-07 23:28:43.305760] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3041356 ] 00:32:20.875 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:20.875 Zero copy mechanism will not be used. 00:32:20.875 EAL: No free 2048 kB hugepages reported on node 1 00:32:20.875 [2024-06-07 23:28:43.380140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:20.875 [2024-06-07 23:28:43.406878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:21.445 23:28:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:21.445 23:28:44 -- common/autotest_common.sh@852 -- # return 0 00:32:21.445 23:28:44 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:32:21.445 23:28:44 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:32:21.445 23:28:44 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:21.705 23:28:44 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:21.705 23:28:44 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:21.966 nvme0n1 00:32:21.966 23:28:44 -- host/digest.sh@91 -- # bperf_py perform_tests 00:32:21.966 23:28:44 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:21.966 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:21.966 Zero copy mechanism will not be used. 00:32:21.966 Running I/O for 2 seconds... 
00:32:24.509 00:32:24.509 Latency(us) 00:32:24.509 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:24.509 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:24.509 nvme0n1 : 2.01 4270.25 533.78 0.00 0.00 3739.88 1508.69 15947.09 00:32:24.509 =================================================================================================================== 00:32:24.509 Total : 4270.25 533.78 0.00 0.00 3739.88 1508.69 15947.09 00:32:24.509 0 00:32:24.509 23:28:46 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:32:24.509 23:28:46 -- host/digest.sh@92 -- # get_accel_stats 00:32:24.509 23:28:46 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:24.509 23:28:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:24.509 23:28:46 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:24.509 | select(.opcode=="crc32c") 00:32:24.509 | "\(.module_name) \(.executed)"' 00:32:24.509 23:28:46 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:32:24.509 23:28:46 -- host/digest.sh@93 -- # exp_module=software 00:32:24.509 23:28:46 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:32:24.509 23:28:46 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:24.509 23:28:46 -- host/digest.sh@97 -- # killprocess 3041356 00:32:24.509 23:28:46 -- common/autotest_common.sh@926 -- # '[' -z 3041356 ']' 00:32:24.509 23:28:46 -- common/autotest_common.sh@930 -- # kill -0 3041356 00:32:24.509 23:28:46 -- common/autotest_common.sh@931 -- # uname 00:32:24.509 23:28:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:24.509 23:28:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3041356 00:32:24.509 23:28:46 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:24.509 23:28:46 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:24.509 23:28:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3041356' 00:32:24.509 killing process with pid 3041356 00:32:24.509 23:28:46 -- common/autotest_common.sh@945 -- # kill 3041356 00:32:24.509 Received shutdown signal, test time was about 2.000000 seconds 00:32:24.509 00:32:24.509 Latency(us) 00:32:24.509 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:24.509 =================================================================================================================== 00:32:24.509 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:24.509 23:28:46 -- common/autotest_common.sh@950 -- # wait 3041356 00:32:24.509 23:28:46 -- host/digest.sh@126 -- # killprocess 3038931 00:32:24.509 23:28:46 -- common/autotest_common.sh@926 -- # '[' -z 3038931 ']' 00:32:24.509 23:28:46 -- common/autotest_common.sh@930 -- # kill -0 3038931 00:32:24.509 23:28:46 -- common/autotest_common.sh@931 -- # uname 00:32:24.509 23:28:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:24.509 23:28:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3038931 00:32:24.509 23:28:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:24.509 23:28:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:24.509 23:28:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3038931' 00:32:24.509 killing process with pid 3038931 00:32:24.509 23:28:46 -- common/autotest_common.sh@945 -- # kill 3038931 00:32:24.509 23:28:46 -- common/autotest_common.sh@950 -- # wait 3038931 
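
[Editor's note] The repeated teardowns above, and the final kill of the nvmf target at pid 3038931, all go through the same killprocess helper; roughly (a sketch of the checks visible in the trace, not the helper verbatim):

    killprocess() {
        local pid=$1
        kill -0 "$pid"                                    # must still be running
        local name
        name=$(ps --no-headers -o comm= "$pid")           # reactor_0/reactor_1 in these logs
        [[ $name != sudo ]] && kill "$pid"                # never signal a sudo wrapper directly
        wait "$pid"                                       # reap it and propagate the exit status
    }
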
00:32:24.509 00:32:24.509 real 0m16.139s 00:32:24.509 user 0m31.186s 00:32:24.509 sys 0m3.483s 00:32:24.509 23:28:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:24.509 23:28:47 -- common/autotest_common.sh@10 -- # set +x 00:32:24.509 ************************************ 00:32:24.509 END TEST nvmf_digest_clean 00:32:24.509 ************************************ 00:32:24.509 23:28:47 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:32:24.509 23:28:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:24.509 23:28:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:24.509 23:28:47 -- common/autotest_common.sh@10 -- # set +x 00:32:24.509 ************************************ 00:32:24.509 START TEST nvmf_digest_error 00:32:24.509 ************************************ 00:32:24.509 23:28:47 -- common/autotest_common.sh@1104 -- # run_digest_error 00:32:24.509 23:28:47 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:32:24.509 23:28:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:24.509 23:28:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:24.509 23:28:47 -- common/autotest_common.sh@10 -- # set +x 00:32:24.509 23:28:47 -- nvmf/common.sh@469 -- # nvmfpid=3042075 00:32:24.509 23:28:47 -- nvmf/common.sh@470 -- # waitforlisten 3042075 00:32:24.509 23:28:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:24.509 23:28:47 -- common/autotest_common.sh@819 -- # '[' -z 3042075 ']' 00:32:24.509 23:28:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:24.509 23:28:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:24.509 23:28:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:24.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:24.509 23:28:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:24.509 23:28:47 -- common/autotest_common.sh@10 -- # set +x 00:32:24.810 [2024-06-07 23:28:47.213397] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:32:24.810 [2024-06-07 23:28:47.213451] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:24.810 EAL: No free 2048 kB hugepages reported on node 1 00:32:24.810 [2024-06-07 23:28:47.279019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:24.810 [2024-06-07 23:28:47.306203] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:24.810 [2024-06-07 23:28:47.306334] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:24.810 [2024-06-07 23:28:47.306343] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:24.810 [2024-06-07 23:28:47.306349] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:24.810 [2024-06-07 23:28:47.306373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:25.403 23:28:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:25.403 23:28:47 -- common/autotest_common.sh@852 -- # return 0 00:32:25.403 23:28:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:25.403 23:28:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:25.403 23:28:47 -- common/autotest_common.sh@10 -- # set +x 00:32:25.403 23:28:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:25.403 23:28:48 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:32:25.403 23:28:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:25.403 23:28:48 -- common/autotest_common.sh@10 -- # set +x 00:32:25.403 [2024-06-07 23:28:48.024411] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:32:25.403 23:28:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:25.403 23:28:48 -- host/digest.sh@104 -- # common_target_config 00:32:25.403 23:28:48 -- host/digest.sh@43 -- # rpc_cmd 00:32:25.403 23:28:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:25.403 23:28:48 -- common/autotest_common.sh@10 -- # set +x 00:32:25.664 null0 00:32:25.664 [2024-06-07 23:28:48.099127] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:25.664 [2024-06-07 23:28:48.123312] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:25.664 23:28:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:25.664 23:28:48 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:32:25.664 23:28:48 -- host/digest.sh@54 -- # local rw bs qd 00:32:25.664 23:28:48 -- host/digest.sh@56 -- # rw=randread 00:32:25.664 23:28:48 -- host/digest.sh@56 -- # bs=4096 00:32:25.664 23:28:48 -- host/digest.sh@56 -- # qd=128 00:32:25.664 23:28:48 -- host/digest.sh@58 -- # bperfpid=3042372 00:32:25.664 23:28:48 -- host/digest.sh@60 -- # waitforlisten 3042372 /var/tmp/bperf.sock 00:32:25.664 23:28:48 -- common/autotest_common.sh@819 -- # '[' -z 3042372 ']' 00:32:25.664 23:28:48 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:32:25.664 23:28:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:25.664 23:28:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:25.664 23:28:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:25.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:25.664 23:28:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:25.664 23:28:48 -- common/autotest_common.sh@10 -- # set +x 00:32:25.664 [2024-06-07 23:28:48.176470] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:32:25.664 [2024-06-07 23:28:48.176516] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3042372 ] 00:32:25.664 EAL: No free 2048 kB hugepages reported on node 1 00:32:25.664 [2024-06-07 23:28:48.252074] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:25.664 [2024-06-07 23:28:48.279070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:26.604 23:28:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:26.604 23:28:48 -- common/autotest_common.sh@852 -- # return 0 00:32:26.604 23:28:48 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:26.604 23:28:48 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:26.604 23:28:49 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:26.604 23:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.604 23:28:49 -- common/autotest_common.sh@10 -- # set +x 00:32:26.605 23:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.605 23:28:49 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:26.605 23:28:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:26.865 nvme0n1 00:32:26.865 23:28:49 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:32:26.865 23:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.865 23:28:49 -- common/autotest_common.sh@10 -- # set +x 00:32:26.866 23:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.866 23:28:49 -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:26.866 23:28:49 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:26.866 Running I/O for 2 seconds... 
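
[Editor's note] The digest-error suite starting here differs from the clean runs in one step: before driving I/O, crc32c on the target is rerouted through the 'error' accel module and armed to corrupt digests, so the host has to absorb data-digest failures. The RPCs are the ones echoed in the trace; this sketch only lines them up and shortens the paths (and omits the netns/socket plumbing that rpc_cmd hides).

    # at target startup: route crc32c through the 'error' accel module
    rpc.py accel_assign_opc -o crc32c -m error

    # host side: unlimited retries plus per-error stats before attaching
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # keep injection disabled while the controller attaches with --ddgst, then arm it
    rpc.py accel_error_inject_error -o crc32c -t disable
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

    # run the workload; the injected corruptions surface as data digest errors below
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The flood of 'data digest error' and 'COMMAND TRANSIENT TRANSPORT ERROR ... sqhd:0001' notices that follows is the expected outcome: each corrupted digest is reported by the host initiator as a transient transport error and, with --bdev-retry-count -1, retried rather than surfaced as an I/O failure.
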
00:32:26.866 [2024-06-07 23:28:49.438537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:26.866 [2024-06-07 23:28:49.438567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.866 [2024-06-07 23:28:49.438575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.866 [2024-06-07 23:28:49.452787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:26.866 [2024-06-07 23:28:49.452805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.866 [2024-06-07 23:28:49.452812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.866 [2024-06-07 23:28:49.466229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:26.866 [2024-06-07 23:28:49.466251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.866 [2024-06-07 23:28:49.466258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.866 [2024-06-07 23:28:49.479164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:26.866 [2024-06-07 23:28:49.479182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.866 [2024-06-07 23:28:49.479189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.866 [2024-06-07 23:28:49.490126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:26.866 [2024-06-07 23:28:49.490144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.866 [2024-06-07 23:28:49.490151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.866 [2024-06-07 23:28:49.501812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:26.866 [2024-06-07 23:28:49.501828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.866 [2024-06-07 23:28:49.501835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.866 [2024-06-07 23:28:49.512773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:26.866 [2024-06-07 23:28:49.512790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.866 [2024-06-07 23:28:49.512796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.866 [2024-06-07 23:28:49.525081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:26.866 [2024-06-07 23:28:49.525098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.866 [2024-06-07 23:28:49.525109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.866 [2024-06-07 23:28:49.536124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:26.866 [2024-06-07 23:28:49.536141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.866 [2024-06-07 23:28:49.536148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.126 [2024-06-07 23:28:49.546842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.126 [2024-06-07 23:28:49.546858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.126 [2024-06-07 23:28:49.546865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.126 [2024-06-07 23:28:49.558803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.126 [2024-06-07 23:28:49.558820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.126 [2024-06-07 23:28:49.558826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.126 [2024-06-07 23:28:49.569337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.126 [2024-06-07 23:28:49.569353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.126 [2024-06-07 23:28:49.569360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.126 [2024-06-07 23:28:49.581492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.126 [2024-06-07 23:28:49.581508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.126 [2024-06-07 23:28:49.581515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.126 [2024-06-07 23:28:49.592261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.126 [2024-06-07 23:28:49.592277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.126 [2024-06-07 23:28:49.592284] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.126 [2024-06-07 23:28:49.604979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.126 [2024-06-07 23:28:49.604996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.126 [2024-06-07 23:28:49.605002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.126 [2024-06-07 23:28:49.614929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.126 [2024-06-07 23:28:49.614946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:43 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.126 [2024-06-07 23:28:49.614952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.126 [2024-06-07 23:28:49.628186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.126 [2024-06-07 23:28:49.628207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.126 [2024-06-07 23:28:49.628213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.126 [2024-06-07 23:28:49.639935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.126 [2024-06-07 23:28:49.639952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.126 [2024-06-07 23:28:49.639958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.126 [2024-06-07 23:28:49.651040] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.127 [2024-06-07 23:28:49.651057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.127 [2024-06-07 23:28:49.651063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.127 [2024-06-07 23:28:49.662091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.127 [2024-06-07 23:28:49.662107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.127 [2024-06-07 23:28:49.662114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.127 [2024-06-07 23:28:49.673619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.127 [2024-06-07 23:28:49.673636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.127 [2024-06-07 
23:28:49.673642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.127 [2024-06-07 23:28:49.684636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.127 [2024-06-07 23:28:49.684653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.127 [2024-06-07 23:28:49.684660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.127 [2024-06-07 23:28:49.695642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.127 [2024-06-07 23:28:49.695658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.127 [2024-06-07 23:28:49.695665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.127 [2024-06-07 23:28:49.707561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.127 [2024-06-07 23:28:49.707578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.127 [2024-06-07 23:28:49.707584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.127 [2024-06-07 23:28:49.718272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.127 [2024-06-07 23:28:49.718289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.127 [2024-06-07 23:28:49.718295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.127 [2024-06-07 23:28:49.730175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.127 [2024-06-07 23:28:49.730191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.127 [2024-06-07 23:28:49.730197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.127 [2024-06-07 23:28:49.741069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.127 [2024-06-07 23:28:49.741086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.127 [2024-06-07 23:28:49.741093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.127 [2024-06-07 23:28:49.752060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.127 [2024-06-07 23:28:49.752077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15696 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:32:27.127 [2024-06-07 23:28:49.752082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.127 [2024-06-07 23:28:49.764093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.127 [2024-06-07 23:28:49.764109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.127 [2024-06-07 23:28:49.764115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.127 [2024-06-07 23:28:49.775250] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.127 [2024-06-07 23:28:49.775267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.127 [2024-06-07 23:28:49.775273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.127 [2024-06-07 23:28:49.786285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.127 [2024-06-07 23:28:49.786302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.127 [2024-06-07 23:28:49.786308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.127 [2024-06-07 23:28:49.798021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.127 [2024-06-07 23:28:49.798038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.127 [2024-06-07 23:28:49.798044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.388 [2024-06-07 23:28:49.809062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.388 [2024-06-07 23:28:49.809079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.388 [2024-06-07 23:28:49.809085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.388 [2024-06-07 23:28:49.820158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.388 [2024-06-07 23:28:49.820175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.388 [2024-06-07 23:28:49.820184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.388 [2024-06-07 23:28:49.832316] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.388 [2024-06-07 23:28:49.832333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:109 nsid:1 lba:14250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.388 [2024-06-07 23:28:49.832340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.388 [2024-06-07 23:28:49.845246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.388 [2024-06-07 23:28:49.845262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.388 [2024-06-07 23:28:49.845269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.388 [2024-06-07 23:28:49.859990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.388 [2024-06-07 23:28:49.860007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.388 [2024-06-07 23:28:49.860013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.388 [2024-06-07 23:28:49.869063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.388 [2024-06-07 23:28:49.869080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.388 [2024-06-07 23:28:49.869086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.388 [2024-06-07 23:28:49.883302] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.388 [2024-06-07 23:28:49.883319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.388 [2024-06-07 23:28:49.883325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.388 [2024-06-07 23:28:49.896419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.388 [2024-06-07 23:28:49.896436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.388 [2024-06-07 23:28:49.896442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.388 [2024-06-07 23:28:49.910348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.388 [2024-06-07 23:28:49.910365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.388 [2024-06-07 23:28:49.910372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.388 [2024-06-07 23:28:49.922955] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.388 [2024-06-07 23:28:49.922971] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.388 [2024-06-07 23:28:49.922977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.388 [2024-06-07 23:28:49.934375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.388 [2024-06-07 23:28:49.934392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.388 [2024-06-07 23:28:49.934398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.388 [2024-06-07 23:28:49.944713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.388 [2024-06-07 23:28:49.944729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:25472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.389 [2024-06-07 23:28:49.944735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.389 [2024-06-07 23:28:49.956970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.389 [2024-06-07 23:28:49.956987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.389 [2024-06-07 23:28:49.956993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.389 [2024-06-07 23:28:49.967580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.389 [2024-06-07 23:28:49.967596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.389 [2024-06-07 23:28:49.967603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.389 [2024-06-07 23:28:49.978057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.389 [2024-06-07 23:28:49.978074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.389 [2024-06-07 23:28:49.978080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.389 [2024-06-07 23:28:49.990513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.389 [2024-06-07 23:28:49.990530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.389 [2024-06-07 23:28:49.990536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.389 [2024-06-07 23:28:50.001487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 
00:32:27.389 [2024-06-07 23:28:50.001504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.389 [2024-06-07 23:28:50.001511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.389 [2024-06-07 23:28:50.013040] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.389 [2024-06-07 23:28:50.013058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.389 [2024-06-07 23:28:50.013065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.389 [2024-06-07 23:28:50.023651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.389 [2024-06-07 23:28:50.023669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.389 [2024-06-07 23:28:50.023678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.389 [2024-06-07 23:28:50.035855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.389 [2024-06-07 23:28:50.035872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.389 [2024-06-07 23:28:50.035878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.389 [2024-06-07 23:28:50.049076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.389 [2024-06-07 23:28:50.049092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.389 [2024-06-07 23:28:50.049098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.389 [2024-06-07 23:28:50.063076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.389 [2024-06-07 23:28:50.063093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.389 [2024-06-07 23:28:50.063099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.650 [2024-06-07 23:28:50.076553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.650 [2024-06-07 23:28:50.076570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.650 [2024-06-07 23:28:50.076576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.650 [2024-06-07 23:28:50.090030] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.650 [2024-06-07 23:28:50.090046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.650 [2024-06-07 23:28:50.090054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.650 [2024-06-07 23:28:50.103969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.650 [2024-06-07 23:28:50.103986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.650 [2024-06-07 23:28:50.103992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.650 [2024-06-07 23:28:50.113226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.650 [2024-06-07 23:28:50.113246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.650 [2024-06-07 23:28:50.113254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.650 [2024-06-07 23:28:50.127990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.650 [2024-06-07 23:28:50.128008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.650 [2024-06-07 23:28:50.128014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.650 [2024-06-07 23:28:50.139877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.650 [2024-06-07 23:28:50.139896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.650 [2024-06-07 23:28:50.139903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.650 [2024-06-07 23:28:50.150635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.650 [2024-06-07 23:28:50.150651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.650 [2024-06-07 23:28:50.150658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.650 [2024-06-07 23:28:50.163259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.650 [2024-06-07 23:28:50.163275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.650 [2024-06-07 23:28:50.163281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:32:27.650 [2024-06-07 23:28:50.173884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.650 [2024-06-07 23:28:50.173901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.650 [2024-06-07 23:28:50.173907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.650 [2024-06-07 23:28:50.185775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.650 [2024-06-07 23:28:50.185791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.650 [2024-06-07 23:28:50.185798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.650 [2024-06-07 23:28:50.196954] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.650 [2024-06-07 23:28:50.196970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.650 [2024-06-07 23:28:50.196977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.650 [2024-06-07 23:28:50.208722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.650 [2024-06-07 23:28:50.208738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.650 [2024-06-07 23:28:50.208744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.650 [2024-06-07 23:28:50.219747] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.651 [2024-06-07 23:28:50.219765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.651 [2024-06-07 23:28:50.219772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.651 [2024-06-07 23:28:50.231103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.651 [2024-06-07 23:28:50.231119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.651 [2024-06-07 23:28:50.231125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.651 [2024-06-07 23:28:50.242671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.651 [2024-06-07 23:28:50.242688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.651 [2024-06-07 23:28:50.242695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.651 [2024-06-07 23:28:50.253633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.651 [2024-06-07 23:28:50.253648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.651 [2024-06-07 23:28:50.253655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.651 [2024-06-07 23:28:50.264612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.651 [2024-06-07 23:28:50.264628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.651 [2024-06-07 23:28:50.264635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.651 [2024-06-07 23:28:50.276599] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.651 [2024-06-07 23:28:50.276615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.651 [2024-06-07 23:28:50.276621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.651 [2024-06-07 23:28:50.288415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.651 [2024-06-07 23:28:50.288431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.651 [2024-06-07 23:28:50.288437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.651 [2024-06-07 23:28:50.299187] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.651 [2024-06-07 23:28:50.299204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.651 [2024-06-07 23:28:50.299210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.651 [2024-06-07 23:28:50.311961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.651 [2024-06-07 23:28:50.311977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.651 [2024-06-07 23:28:50.311983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.651 [2024-06-07 23:28:50.323758] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.651 [2024-06-07 23:28:50.323775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.651 [2024-06-07 23:28:50.323781] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.912 [2024-06-07 23:28:50.333991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.912 [2024-06-07 23:28:50.334008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.912 [2024-06-07 23:28:50.334018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.912 [2024-06-07 23:28:50.344116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.912 [2024-06-07 23:28:50.344132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.912 [2024-06-07 23:28:50.344138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.912 [2024-06-07 23:28:50.357719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.912 [2024-06-07 23:28:50.357735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.912 [2024-06-07 23:28:50.357742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.912 [2024-06-07 23:28:50.369526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.912 [2024-06-07 23:28:50.369543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.912 [2024-06-07 23:28:50.369549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.912 [2024-06-07 23:28:50.381153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.912 [2024-06-07 23:28:50.381168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.912 [2024-06-07 23:28:50.381175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.912 [2024-06-07 23:28:50.392339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.912 [2024-06-07 23:28:50.392356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.912 [2024-06-07 23:28:50.392362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.912 [2024-06-07 23:28:50.403502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.912 [2024-06-07 23:28:50.403519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:27.912 [2024-06-07 23:28:50.403525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.912 [2024-06-07 23:28:50.415084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.912 [2024-06-07 23:28:50.415101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:15852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.912 [2024-06-07 23:28:50.415107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.912 [2024-06-07 23:28:50.426735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.912 [2024-06-07 23:28:50.426751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.912 [2024-06-07 23:28:50.426757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.912 [2024-06-07 23:28:50.437511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.912 [2024-06-07 23:28:50.437527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.912 [2024-06-07 23:28:50.437533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.912 [2024-06-07 23:28:50.449055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.912 [2024-06-07 23:28:50.449071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.912 [2024-06-07 23:28:50.449077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.912 [2024-06-07 23:28:50.459962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.912 [2024-06-07 23:28:50.459978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.912 [2024-06-07 23:28:50.459984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.912 [2024-06-07 23:28:50.471075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.913 [2024-06-07 23:28:50.471091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.913 [2024-06-07 23:28:50.471097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.913 [2024-06-07 23:28:50.482681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.913 [2024-06-07 23:28:50.482698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 
lba:5984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.913 [2024-06-07 23:28:50.482704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.913 [2024-06-07 23:28:50.494653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.913 [2024-06-07 23:28:50.494669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.913 [2024-06-07 23:28:50.494675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.913 [2024-06-07 23:28:50.505472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.913 [2024-06-07 23:28:50.505488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:25403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.913 [2024-06-07 23:28:50.505494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.913 [2024-06-07 23:28:50.516384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.913 [2024-06-07 23:28:50.516400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.913 [2024-06-07 23:28:50.516406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.913 [2024-06-07 23:28:50.528275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.913 [2024-06-07 23:28:50.528291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:17138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.913 [2024-06-07 23:28:50.528300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.913 [2024-06-07 23:28:50.539113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.913 [2024-06-07 23:28:50.539130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.913 [2024-06-07 23:28:50.539136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.913 [2024-06-07 23:28:50.550924] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.913 [2024-06-07 23:28:50.550940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.913 [2024-06-07 23:28:50.550947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.913 [2024-06-07 23:28:50.561927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.913 [2024-06-07 23:28:50.561943] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.913 [2024-06-07 23:28:50.561949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.913 [2024-06-07 23:28:50.572810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.913 [2024-06-07 23:28:50.572826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.913 [2024-06-07 23:28:50.572832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.913 [2024-06-07 23:28:50.584735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:27.913 [2024-06-07 23:28:50.584751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.913 [2024-06-07 23:28:50.584757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.173 [2024-06-07 23:28:50.595644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.173 [2024-06-07 23:28:50.595661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.173 [2024-06-07 23:28:50.595667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.173 [2024-06-07 23:28:50.606755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.173 [2024-06-07 23:28:50.606771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.173 [2024-06-07 23:28:50.606777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.173 [2024-06-07 23:28:50.618576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.173 [2024-06-07 23:28:50.618591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.173 [2024-06-07 23:28:50.618597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.173 [2024-06-07 23:28:50.629704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.173 [2024-06-07 23:28:50.629723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.173 [2024-06-07 23:28:50.629729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.173 [2024-06-07 23:28:50.640520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 
00:32:28.174 [2024-06-07 23:28:50.640536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.174 [2024-06-07 23:28:50.640542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.174 [2024-06-07 23:28:50.652580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.174 [2024-06-07 23:28:50.652597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.174 [2024-06-07 23:28:50.652603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.174 [2024-06-07 23:28:50.664006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.174 [2024-06-07 23:28:50.664022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.174 [2024-06-07 23:28:50.664028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.174 [2024-06-07 23:28:50.675113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.174 [2024-06-07 23:28:50.675129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.174 [2024-06-07 23:28:50.675135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.174 [2024-06-07 23:28:50.686075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.174 [2024-06-07 23:28:50.686091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.174 [2024-06-07 23:28:50.686097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.174 [2024-06-07 23:28:50.698422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.174 [2024-06-07 23:28:50.698438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:14917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.174 [2024-06-07 23:28:50.698444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.174 [2024-06-07 23:28:50.708973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.174 [2024-06-07 23:28:50.708990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.174 [2024-06-07 23:28:50.708996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.174 [2024-06-07 23:28:50.721427] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.174 [2024-06-07 23:28:50.721443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.174 [2024-06-07 23:28:50.721449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.174 [2024-06-07 23:28:50.731748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.174 [2024-06-07 23:28:50.731764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.174 [2024-06-07 23:28:50.731771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.174 [2024-06-07 23:28:50.742798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.174 [2024-06-07 23:28:50.742815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:10874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.174 [2024-06-07 23:28:50.742821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.174 [2024-06-07 23:28:50.754657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.174 [2024-06-07 23:28:50.754674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.174 [2024-06-07 23:28:50.754680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.174 [2024-06-07 23:28:50.765189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.174 [2024-06-07 23:28:50.765206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.174 [2024-06-07 23:28:50.765212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.174 [2024-06-07 23:28:50.778229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.174 [2024-06-07 23:28:50.778248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.174 [2024-06-07 23:28:50.778255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.174 [2024-06-07 23:28:50.789974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.174 [2024-06-07 23:28:50.789990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.174 [2024-06-07 23:28:50.789996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:32:28.174 [2024-06-07 23:28:50.801355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.174 [2024-06-07 23:28:50.801371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.174 [2024-06-07 23:28:50.801378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.174 [2024-06-07 23:28:50.812453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.174 [2024-06-07 23:28:50.812470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.174 [2024-06-07 23:28:50.812476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.174 [2024-06-07 23:28:50.823355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.174 [2024-06-07 23:28:50.823371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.174 [2024-06-07 23:28:50.823380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.174 [2024-06-07 23:28:50.834973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.174 [2024-06-07 23:28:50.834989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.174 [2024-06-07 23:28:50.834995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.174 [2024-06-07 23:28:50.845867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.174 [2024-06-07 23:28:50.845883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.174 [2024-06-07 23:28:50.845889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.436 [2024-06-07 23:28:50.857679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.436 [2024-06-07 23:28:50.857695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.436 [2024-06-07 23:28:50.857702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.436 [2024-06-07 23:28:50.868691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.436 [2024-06-07 23:28:50.868706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.436 [2024-06-07 23:28:50.868713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.436 [2024-06-07 23:28:50.879646] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.436 [2024-06-07 23:28:50.879663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.436 [2024-06-07 23:28:50.879669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.436 [2024-06-07 23:28:50.891560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.436 [2024-06-07 23:28:50.891577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.436 [2024-06-07 23:28:50.891583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.436 [2024-06-07 23:28:50.902679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.436 [2024-06-07 23:28:50.902695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.436 [2024-06-07 23:28:50.902702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.436 [2024-06-07 23:28:50.913661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.436 [2024-06-07 23:28:50.913677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.436 [2024-06-07 23:28:50.913683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.436 [2024-06-07 23:28:50.924664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.436 [2024-06-07 23:28:50.924680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.436 [2024-06-07 23:28:50.924686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.436 [2024-06-07 23:28:50.936544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.436 [2024-06-07 23:28:50.936561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.436 [2024-06-07 23:28:50.936567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.436 [2024-06-07 23:28:50.947804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.436 [2024-06-07 23:28:50.947820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.436 [2024-06-07 23:28:50.947826] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.436 [2024-06-07 23:28:50.958766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.436 [2024-06-07 23:28:50.958782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.436 [2024-06-07 23:28:50.958789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.436 [2024-06-07 23:28:50.971019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.436 [2024-06-07 23:28:50.971035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.436 [2024-06-07 23:28:50.971042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.436 [2024-06-07 23:28:50.982023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.436 [2024-06-07 23:28:50.982039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.436 [2024-06-07 23:28:50.982045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.436 [2024-06-07 23:28:50.993762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.436 [2024-06-07 23:28:50.993778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.436 [2024-06-07 23:28:50.993784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.436 [2024-06-07 23:28:51.004900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.436 [2024-06-07 23:28:51.004917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.436 [2024-06-07 23:28:51.004922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.436 [2024-06-07 23:28:51.015945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.436 [2024-06-07 23:28:51.015962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.436 [2024-06-07 23:28:51.015972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.436 [2024-06-07 23:28:51.027614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.436 [2024-06-07 23:28:51.027630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:28.436 [2024-06-07 23:28:51.027636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.436 [2024-06-07 23:28:51.038589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.436 [2024-06-07 23:28:51.038605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.436 [2024-06-07 23:28:51.038611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.436 [2024-06-07 23:28:51.049576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.436 [2024-06-07 23:28:51.049592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.436 [2024-06-07 23:28:51.049598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.436 [2024-06-07 23:28:51.060416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.436 [2024-06-07 23:28:51.060433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.436 [2024-06-07 23:28:51.060439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.437 [2024-06-07 23:28:51.072876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.437 [2024-06-07 23:28:51.072893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.437 [2024-06-07 23:28:51.072899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.437 [2024-06-07 23:28:51.083994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.437 [2024-06-07 23:28:51.084010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.437 [2024-06-07 23:28:51.084017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.437 [2024-06-07 23:28:51.095608] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.437 [2024-06-07 23:28:51.095625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.437 [2024-06-07 23:28:51.095632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.437 [2024-06-07 23:28:51.107430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.437 [2024-06-07 23:28:51.107447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12416 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.437 [2024-06-07 23:28:51.107453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.698 [2024-06-07 23:28:51.119082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.698 [2024-06-07 23:28:51.119101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.698 [2024-06-07 23:28:51.119107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.698 [2024-06-07 23:28:51.129431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.698 [2024-06-07 23:28:51.129448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.698 [2024-06-07 23:28:51.129454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.698 [2024-06-07 23:28:51.142111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.698 [2024-06-07 23:28:51.142128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.698 [2024-06-07 23:28:51.142134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.698 [2024-06-07 23:28:51.153052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.698 [2024-06-07 23:28:51.153069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.698 [2024-06-07 23:28:51.153075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.698 [2024-06-07 23:28:51.164011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.698 [2024-06-07 23:28:51.164027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:25472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.698 [2024-06-07 23:28:51.164033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.698 [2024-06-07 23:28:51.176012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.698 [2024-06-07 23:28:51.176028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.698 [2024-06-07 23:28:51.176034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.698 [2024-06-07 23:28:51.187158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.698 [2024-06-07 23:28:51.187175] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.698 [2024-06-07 23:28:51.187181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.698 [2024-06-07 23:28:51.197996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.698 [2024-06-07 23:28:51.198013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.698 [2024-06-07 23:28:51.198019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.698 [2024-06-07 23:28:51.210129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.698 [2024-06-07 23:28:51.210146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.698 [2024-06-07 23:28:51.210152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.698 [2024-06-07 23:28:51.221577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.698 [2024-06-07 23:28:51.221593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.698 [2024-06-07 23:28:51.221599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.698 [2024-06-07 23:28:51.232675] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.698 [2024-06-07 23:28:51.232691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.698 [2024-06-07 23:28:51.232697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.698 [2024-06-07 23:28:51.243624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.698 [2024-06-07 23:28:51.243640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.698 [2024-06-07 23:28:51.243647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.698 [2024-06-07 23:28:51.256009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.698 [2024-06-07 23:28:51.256026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.698 [2024-06-07 23:28:51.256032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.698 [2024-06-07 23:28:51.266886] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 
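[Editor's note, not part of the console output: the repeated "data digest error on tqpair" records above come from the NVMe/TCP receive path failing its data-digest check on C2H data PDUs; NVMe/TCP data digests are CRC32C values. The sketch below is a generic, minimal illustration of that kind of check — it is not SPDK code, and the `received_ddgst` value is a hypothetical known-answer test, not a value taken from this log.]

```c
/*
 * Minimal, self-contained CRC32C (Castagnoli) known-answer check, sketching the
 * comparison behind a "data digest error": digest computed over the received
 * payload vs. the DDGST field carried in the PDU. Illustrative only; real
 * implementations use table-driven or hardware-accelerated CRC32C.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t crc32c(const void *buf, size_t len)
{
	const uint8_t *p = buf;
	uint32_t crc = 0xFFFFFFFFu;

	while (len--) {
		crc ^= *p++;
		for (int i = 0; i < 8; i++) {
			/* 0x82F63B78 is the reflected Castagnoli polynomial. */
			crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : (crc >> 1);
		}
	}
	return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
	/* Standard check value: CRC32C("123456789") == 0xE3069283. */
	const char payload[] = "123456789";
	uint32_t received_ddgst = 0xE3069283u; /* hypothetical digest from the PDU */
	uint32_t computed = crc32c(payload, sizeof(payload) - 1);

	if (computed != received_ddgst) {
		printf("data digest error: computed 0x%08X, expected 0x%08X\n",
		       computed, received_ddgst);
		return 1;
	}
	printf("data digest OK: 0x%08X\n", computed);
	return 0;
}
```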
00:32:28.698 [2024-06-07 23:28:51.266903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.698 [2024-06-07 23:28:51.266909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.698 [2024-06-07 23:28:51.278330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.698 [2024-06-07 23:28:51.278346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:81 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.698 [2024-06-07 23:28:51.278352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.699 [2024-06-07 23:28:51.288069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.699 [2024-06-07 23:28:51.288086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.699 [2024-06-07 23:28:51.288092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.699 [2024-06-07 23:28:51.301254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.699 [2024-06-07 23:28:51.301270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.699 [2024-06-07 23:28:51.301276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.699 [2024-06-07 23:28:51.312296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.699 [2024-06-07 23:28:51.312313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.699 [2024-06-07 23:28:51.312322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.699 [2024-06-07 23:28:51.323256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.699 [2024-06-07 23:28:51.323273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.699 [2024-06-07 23:28:51.323279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.699 [2024-06-07 23:28:51.335074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.699 [2024-06-07 23:28:51.335090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.699 [2024-06-07 23:28:51.335096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.699 [2024-06-07 23:28:51.346258] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.699 [2024-06-07 23:28:51.346275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.699 [2024-06-07 23:28:51.346281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.699 [2024-06-07 23:28:51.357200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.699 [2024-06-07 23:28:51.357217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.699 [2024-06-07 23:28:51.357223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.699 [2024-06-07 23:28:51.368018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.699 [2024-06-07 23:28:51.368035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.699 [2024-06-07 23:28:51.368042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.959 [2024-06-07 23:28:51.380115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.959 [2024-06-07 23:28:51.380132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.959 [2024-06-07 23:28:51.380139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.959 [2024-06-07 23:28:51.391925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.959 [2024-06-07 23:28:51.391942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.960 [2024-06-07 23:28:51.391948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.960 [2024-06-07 23:28:51.402295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.960 [2024-06-07 23:28:51.402311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.960 [2024-06-07 23:28:51.402317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.960 [2024-06-07 23:28:51.414214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.960 [2024-06-07 23:28:51.414233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.960 [2024-06-07 23:28:51.414239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:32:28.960 [2024-06-07 23:28:51.424340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2188470) 00:32:28.960 [2024-06-07 23:28:51.424357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.960 [2024-06-07 23:28:51.424363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.960 00:32:28.960 Latency(us) 00:32:28.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:28.960 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:28.960 nvme0n1 : 2.00 22050.20 86.13 0.00 0.00 5798.24 1925.12 14636.37 00:32:28.960 =================================================================================================================== 00:32:28.960 Total : 22050.20 86.13 0.00 0.00 5798.24 1925.12 14636.37 00:32:28.960 0 00:32:28.960 23:28:51 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:28.960 23:28:51 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:28.960 | .driver_specific 00:32:28.960 | .nvme_error 00:32:28.960 | .status_code 00:32:28.960 | .command_transient_transport_error' 00:32:28.960 23:28:51 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:28.960 23:28:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:28.960 23:28:51 -- host/digest.sh@71 -- # (( 173 > 0 )) 00:32:28.960 23:28:51 -- host/digest.sh@73 -- # killprocess 3042372 00:32:28.960 23:28:51 -- common/autotest_common.sh@926 -- # '[' -z 3042372 ']' 00:32:28.960 23:28:51 -- common/autotest_common.sh@930 -- # kill -0 3042372 00:32:28.960 23:28:51 -- common/autotest_common.sh@931 -- # uname 00:32:28.960 23:28:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:28.960 23:28:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3042372 00:32:29.221 23:28:51 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:29.221 23:28:51 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:29.221 23:28:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3042372' 00:32:29.221 killing process with pid 3042372 00:32:29.221 23:28:51 -- common/autotest_common.sh@945 -- # kill 3042372 00:32:29.221 Received shutdown signal, test time was about 2.000000 seconds 00:32:29.221 00:32:29.221 Latency(us) 00:32:29.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:29.221 =================================================================================================================== 00:32:29.221 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:29.221 23:28:51 -- common/autotest_common.sh@950 -- # wait 3042372 00:32:29.221 23:28:51 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:32:29.221 23:28:51 -- host/digest.sh@54 -- # local rw bs qd 00:32:29.221 23:28:51 -- host/digest.sh@56 -- # rw=randread 00:32:29.221 23:28:51 -- host/digest.sh@56 -- # bs=131072 00:32:29.221 23:28:51 -- host/digest.sh@56 -- # qd=16 00:32:29.221 23:28:51 -- host/digest.sh@58 -- # bperfpid=3043114 00:32:29.221 23:28:51 -- host/digest.sh@60 -- # waitforlisten 3043114 /var/tmp/bperf.sock 00:32:29.221 23:28:51 -- common/autotest_common.sh@819 -- # '[' -z 3043114 ']' 00:32:29.221 23:28:51 -- host/digest.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:32:29.221 23:28:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:29.221 23:28:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:29.221 23:28:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:29.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:29.221 23:28:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:29.221 23:28:51 -- common/autotest_common.sh@10 -- # set +x 00:32:29.221 [2024-06-07 23:28:51.806997] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:32:29.221 [2024-06-07 23:28:51.807051] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3043114 ] 00:32:29.221 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:29.221 Zero copy mechanism will not be used. 00:32:29.221 EAL: No free 2048 kB hugepages reported on node 1 00:32:29.221 [2024-06-07 23:28:51.880669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:29.482 [2024-06-07 23:28:51.907271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:30.053 23:28:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:30.053 23:28:52 -- common/autotest_common.sh@852 -- # return 0 00:32:30.053 23:28:52 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:30.053 23:28:52 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:30.313 23:28:52 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:30.313 23:28:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:30.313 23:28:52 -- common/autotest_common.sh@10 -- # set +x 00:32:30.313 23:28:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:30.313 23:28:52 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:30.313 23:28:52 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:30.313 nvme0n1 00:32:30.313 23:28:52 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:32:30.313 23:28:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:30.313 23:28:52 -- common/autotest_common.sh@10 -- # set +x 00:32:30.313 23:28:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:30.313 23:28:52 -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:30.313 23:28:52 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:30.574 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:30.574 Zero copy mechanism will not be used. 00:32:30.574 Running I/O for 2 seconds... 
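(For orientation, and not part of the captured output: the trace above shows the host/digest.sh helpers starting the next randread pass with 131072-byte I/O at queue depth 16 and wiring up crc32c data-digest error injection over the bperf RPC socket. Below is a minimal bash sketch of that sequence; the paths, address, NQN and RPC names are copied from the trace itself, while the script layout is an illustrative assumption rather than digest.sh verbatim.)

    #!/usr/bin/env bash
    # Sketch of the digest-error bperf setup traced above (assumed layout, not digest.sh verbatim).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bperf.sock

    # Start bdevperf on core 1 (mask 0x2) with the workload from "run_bperf_err randread 131072 16".
    "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randread -o 131072 -t 2 -q 16 -z &

    # Wait for the RPC socket to appear (the real run does this via waitforlisten).
    while [ ! -S "$SOCK" ]; do sleep 0.2; done

    rpc() { "$SPDK/scripts/rpc.py" -s "$SOCK" "$@"; }

    # Keep per-command NVMe error statistics and retry failed commands indefinitely.
    rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the TCP controller with data digest enabled; crc32c error injection is
    # disabled while attaching, then set to corrupt every 32nd operation.
    rpc accel_error_inject_error -o crc32c -t disable
    rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc accel_error_inject_error -o crc32c -t corrupt -i 32

    # Drive I/O for the 2-second window, then read back the transient transport error
    # count the same way get_transient_errcount does in the earlier part of the trace.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
    rpc bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

(End of sketch; the captured log continues below.)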
00:32:30.574 [2024-06-07 23:28:53.070256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.574 [2024-06-07 23:28:53.070286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.574 [2024-06-07 23:28:53.070295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:30.574 [2024-06-07 23:28:53.080263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.574 [2024-06-07 23:28:53.080284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.574 [2024-06-07 23:28:53.080292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:30.574 [2024-06-07 23:28:53.090588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.574 [2024-06-07 23:28:53.090606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.574 [2024-06-07 23:28:53.090616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:30.574 [2024-06-07 23:28:53.101192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.574 [2024-06-07 23:28:53.101210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.574 [2024-06-07 23:28:53.101217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.574 [2024-06-07 23:28:53.112296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.574 [2024-06-07 23:28:53.112314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.574 [2024-06-07 23:28:53.112320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:30.574 [2024-06-07 23:28:53.122686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.574 [2024-06-07 23:28:53.122703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.574 [2024-06-07 23:28:53.122710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:30.574 [2024-06-07 23:28:53.134172] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.574 [2024-06-07 23:28:53.134189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.574 [2024-06-07 23:28:53.134196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:30.574 [2024-06-07 23:28:53.145177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.574 [2024-06-07 23:28:53.145196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.574 [2024-06-07 23:28:53.145202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.574 [2024-06-07 23:28:53.156539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.574 [2024-06-07 23:28:53.156557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.574 [2024-06-07 23:28:53.156563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:30.574 [2024-06-07 23:28:53.165590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.574 [2024-06-07 23:28:53.165607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.574 [2024-06-07 23:28:53.165613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:30.574 [2024-06-07 23:28:53.175520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.574 [2024-06-07 23:28:53.175537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.574 [2024-06-07 23:28:53.175543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:30.574 [2024-06-07 23:28:53.187720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.574 [2024-06-07 23:28:53.187742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.574 [2024-06-07 23:28:53.187748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.574 [2024-06-07 23:28:53.198782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.574 [2024-06-07 23:28:53.198800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.574 [2024-06-07 23:28:53.198806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:30.574 [2024-06-07 23:28:53.210161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.574 [2024-06-07 23:28:53.210178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.574 [2024-06-07 23:28:53.210185] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:30.574 [2024-06-07 23:28:53.221848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.574 [2024-06-07 23:28:53.221866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.574 [2024-06-07 23:28:53.221872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:30.574 [2024-06-07 23:28:53.232917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.574 [2024-06-07 23:28:53.232934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.574 [2024-06-07 23:28:53.232941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.574 [2024-06-07 23:28:53.243812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.574 [2024-06-07 23:28:53.243829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.574 [2024-06-07 23:28:53.243835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:30.835 [2024-06-07 23:28:53.255344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.835 [2024-06-07 23:28:53.255361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.835 [2024-06-07 23:28:53.255367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:30.835 [2024-06-07 23:28:53.265870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.835 [2024-06-07 23:28:53.265887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.835 [2024-06-07 23:28:53.265893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:30.835 [2024-06-07 23:28:53.278488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.835 [2024-06-07 23:28:53.278506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.835 [2024-06-07 23:28:53.278512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.835 [2024-06-07 23:28:53.288718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.835 [2024-06-07 23:28:53.288735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:30.835 [2024-06-07 23:28:53.288741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:30.835 [2024-06-07 23:28:53.298655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.835 [2024-06-07 23:28:53.298673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.835 [2024-06-07 23:28:53.298679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:30.835 [2024-06-07 23:28:53.310636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.835 [2024-06-07 23:28:53.310653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.835 [2024-06-07 23:28:53.310659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:30.835 [2024-06-07 23:28:53.321823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.835 [2024-06-07 23:28:53.321842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.835 [2024-06-07 23:28:53.321848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.835 [2024-06-07 23:28:53.330705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.835 [2024-06-07 23:28:53.330723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.835 [2024-06-07 23:28:53.330729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:30.835 [2024-06-07 23:28:53.342716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.835 [2024-06-07 23:28:53.342733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.835 [2024-06-07 23:28:53.342739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:30.835 [2024-06-07 23:28:53.352931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.835 [2024-06-07 23:28:53.352948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.835 [2024-06-07 23:28:53.352954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:30.835 [2024-06-07 23:28:53.364710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.835 [2024-06-07 23:28:53.364727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.835 [2024-06-07 23:28:53.364733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.835 [2024-06-07 23:28:53.374415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.835 [2024-06-07 23:28:53.374432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.835 [2024-06-07 23:28:53.374443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:30.835 [2024-06-07 23:28:53.383461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.835 [2024-06-07 23:28:53.383478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.835 [2024-06-07 23:28:53.383484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:30.835 [2024-06-07 23:28:53.392402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.835 [2024-06-07 23:28:53.392419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.835 [2024-06-07 23:28:53.392425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:30.835 [2024-06-07 23:28:53.402250] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.835 [2024-06-07 23:28:53.402268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.835 [2024-06-07 23:28:53.402274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.835 [2024-06-07 23:28:53.413986] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.835 [2024-06-07 23:28:53.414002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.835 [2024-06-07 23:28:53.414009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:30.835 [2024-06-07 23:28:53.426945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.835 [2024-06-07 23:28:53.426963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.835 [2024-06-07 23:28:53.426969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:30.835 [2024-06-07 23:28:53.440795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.835 [2024-06-07 23:28:53.440812] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.835 [2024-06-07 23:28:53.440818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:30.835 [2024-06-07 23:28:53.454478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.835 [2024-06-07 23:28:53.454495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.835 [2024-06-07 23:28:53.454502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.835 [2024-06-07 23:28:53.468783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.835 [2024-06-07 23:28:53.468800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.835 [2024-06-07 23:28:53.468807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:30.835 [2024-06-07 23:28:53.483140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.835 [2024-06-07 23:28:53.483161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.835 [2024-06-07 23:28:53.483168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:30.835 [2024-06-07 23:28:53.495490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.835 [2024-06-07 23:28:53.495506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.835 [2024-06-07 23:28:53.495512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:30.835 [2024-06-07 23:28:53.503861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.835 [2024-06-07 23:28:53.503877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.835 [2024-06-07 23:28:53.503883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:30.835 [2024-06-07 23:28:53.513793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:30.835 [2024-06-07 23:28:53.513809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.835 [2024-06-07 23:28:53.513815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.097 [2024-06-07 23:28:53.524018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 
00:32:31.097 [2024-06-07 23:28:53.524035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.097 [2024-06-07 23:28:53.524041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.097 [2024-06-07 23:28:53.534554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.097 [2024-06-07 23:28:53.534571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.097 [2024-06-07 23:28:53.534577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.097 [2024-06-07 23:28:53.542954] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.097 [2024-06-07 23:28:53.542971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.097 [2024-06-07 23:28:53.542977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.097 [2024-06-07 23:28:53.551479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.097 [2024-06-07 23:28:53.551496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.097 [2024-06-07 23:28:53.551503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.097 [2024-06-07 23:28:53.560679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.097 [2024-06-07 23:28:53.560696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.097 [2024-06-07 23:28:53.560705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.097 [2024-06-07 23:28:53.570624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.097 [2024-06-07 23:28:53.570640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.097 [2024-06-07 23:28:53.570647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.097 [2024-06-07 23:28:53.581668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.097 [2024-06-07 23:28:53.581686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.097 [2024-06-07 23:28:53.581692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.097 [2024-06-07 23:28:53.591454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.097 [2024-06-07 23:28:53.591471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.097 [2024-06-07 23:28:53.591477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.097 [2024-06-07 23:28:53.603372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.097 [2024-06-07 23:28:53.603390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.097 [2024-06-07 23:28:53.603395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.097 [2024-06-07 23:28:53.614974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.097 [2024-06-07 23:28:53.614991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.097 [2024-06-07 23:28:53.614997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.097 [2024-06-07 23:28:53.625536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.097 [2024-06-07 23:28:53.625554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.097 [2024-06-07 23:28:53.625559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.097 [2024-06-07 23:28:53.637333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.098 [2024-06-07 23:28:53.637351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.098 [2024-06-07 23:28:53.637357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.098 [2024-06-07 23:28:53.648581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.098 [2024-06-07 23:28:53.648598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.098 [2024-06-07 23:28:53.648604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.098 [2024-06-07 23:28:53.657813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.098 [2024-06-07 23:28:53.657833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.098 [2024-06-07 23:28:53.657838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.098 [2024-06-07 23:28:53.666789] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.098 [2024-06-07 23:28:53.666805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.098 [2024-06-07 23:28:53.666812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.098 [2024-06-07 23:28:53.675914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.098 [2024-06-07 23:28:53.675932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.098 [2024-06-07 23:28:53.675937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.098 [2024-06-07 23:28:53.684785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.098 [2024-06-07 23:28:53.684802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.098 [2024-06-07 23:28:53.684809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.098 [2024-06-07 23:28:53.696410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.098 [2024-06-07 23:28:53.696427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.098 [2024-06-07 23:28:53.696433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.098 [2024-06-07 23:28:53.706811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.098 [2024-06-07 23:28:53.706827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.098 [2024-06-07 23:28:53.706833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.098 [2024-06-07 23:28:53.718713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.098 [2024-06-07 23:28:53.718730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.098 [2024-06-07 23:28:53.718736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.098 [2024-06-07 23:28:53.729456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.098 [2024-06-07 23:28:53.729474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.098 [2024-06-07 23:28:53.729480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:32:31.098 [2024-06-07 23:28:53.740769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.098 [2024-06-07 23:28:53.740787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.098 [2024-06-07 23:28:53.740793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.098 [2024-06-07 23:28:53.751355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.098 [2024-06-07 23:28:53.751373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.098 [2024-06-07 23:28:53.751379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.098 [2024-06-07 23:28:53.762249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.098 [2024-06-07 23:28:53.762266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.098 [2024-06-07 23:28:53.762272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.098 [2024-06-07 23:28:53.773146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.098 [2024-06-07 23:28:53.773163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.098 [2024-06-07 23:28:53.773169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.359 [2024-06-07 23:28:53.784737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.359 [2024-06-07 23:28:53.784754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.359 [2024-06-07 23:28:53.784761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.359 [2024-06-07 23:28:53.794872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.359 [2024-06-07 23:28:53.794889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.359 [2024-06-07 23:28:53.794895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.359 [2024-06-07 23:28:53.802609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.359 [2024-06-07 23:28:53.802626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.359 [2024-06-07 23:28:53.802633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.359 [2024-06-07 23:28:53.811781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.359 [2024-06-07 23:28:53.811798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.359 [2024-06-07 23:28:53.811804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.359 [2024-06-07 23:28:53.822565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.359 [2024-06-07 23:28:53.822582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.359 [2024-06-07 23:28:53.822588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.360 [2024-06-07 23:28:53.831998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.360 [2024-06-07 23:28:53.832015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.360 [2024-06-07 23:28:53.832024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.360 [2024-06-07 23:28:53.841992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.360 [2024-06-07 23:28:53.842009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.360 [2024-06-07 23:28:53.842015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.360 [2024-06-07 23:28:53.852425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.360 [2024-06-07 23:28:53.852441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.360 [2024-06-07 23:28:53.852447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.360 [2024-06-07 23:28:53.861997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.360 [2024-06-07 23:28:53.862014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.360 [2024-06-07 23:28:53.862020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.360 [2024-06-07 23:28:53.872207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.360 [2024-06-07 23:28:53.872224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.360 [2024-06-07 23:28:53.872230] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.360 [2024-06-07 23:28:53.879894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.360 [2024-06-07 23:28:53.879911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.360 [2024-06-07 23:28:53.879917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.360 [2024-06-07 23:28:53.889475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.360 [2024-06-07 23:28:53.889493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.360 [2024-06-07 23:28:53.889499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.360 [2024-06-07 23:28:53.900241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.360 [2024-06-07 23:28:53.900261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.360 [2024-06-07 23:28:53.900267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.360 [2024-06-07 23:28:53.912330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.360 [2024-06-07 23:28:53.912347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.360 [2024-06-07 23:28:53.912353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.360 [2024-06-07 23:28:53.923790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.360 [2024-06-07 23:28:53.923810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.360 [2024-06-07 23:28:53.923816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.360 [2024-06-07 23:28:53.935609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.360 [2024-06-07 23:28:53.935626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.360 [2024-06-07 23:28:53.935632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.360 [2024-06-07 23:28:53.946526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.360 [2024-06-07 23:28:53.946543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.360 [2024-06-07 23:28:53.946549] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.360 [2024-06-07 23:28:53.957569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.360 [2024-06-07 23:28:53.957586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.360 [2024-06-07 23:28:53.957592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.360 [2024-06-07 23:28:53.968699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.360 [2024-06-07 23:28:53.968715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.360 [2024-06-07 23:28:53.968721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.360 [2024-06-07 23:28:53.979204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.360 [2024-06-07 23:28:53.979221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.360 [2024-06-07 23:28:53.979227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.360 [2024-06-07 23:28:53.990452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.360 [2024-06-07 23:28:53.990469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.360 [2024-06-07 23:28:53.990475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.360 [2024-06-07 23:28:54.000226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.360 [2024-06-07 23:28:54.000249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.360 [2024-06-07 23:28:54.000255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.360 [2024-06-07 23:28:54.009287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.360 [2024-06-07 23:28:54.009304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.360 [2024-06-07 23:28:54.009311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.360 [2024-06-07 23:28:54.019228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.360 [2024-06-07 23:28:54.019249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:31.360 [2024-06-07 23:28:54.019255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.360 [2024-06-07 23:28:54.029892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.360 [2024-06-07 23:28:54.029909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.360 [2024-06-07 23:28:54.029915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.360 [2024-06-07 23:28:54.039179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.360 [2024-06-07 23:28:54.039196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.360 [2024-06-07 23:28:54.039202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.622 [2024-06-07 23:28:54.048756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.622 [2024-06-07 23:28:54.048774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.622 [2024-06-07 23:28:54.048780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.622 [2024-06-07 23:28:54.059080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.622 [2024-06-07 23:28:54.059097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.622 [2024-06-07 23:28:54.059104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.622 [2024-06-07 23:28:54.067850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.622 [2024-06-07 23:28:54.067867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.622 [2024-06-07 23:28:54.067873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.622 [2024-06-07 23:28:54.077751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.622 [2024-06-07 23:28:54.077768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.622 [2024-06-07 23:28:54.077774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.622 [2024-06-07 23:28:54.090425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.622 [2024-06-07 23:28:54.090442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.622 [2024-06-07 23:28:54.090448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.622 [2024-06-07 23:28:54.101316] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.622 [2024-06-07 23:28:54.101332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.622 [2024-06-07 23:28:54.101341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.622 [2024-06-07 23:28:54.110061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.622 [2024-06-07 23:28:54.110078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.622 [2024-06-07 23:28:54.110084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.622 [2024-06-07 23:28:54.117603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.622 [2024-06-07 23:28:54.117620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.622 [2024-06-07 23:28:54.117626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.622 [2024-06-07 23:28:54.121861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.622 [2024-06-07 23:28:54.121878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.622 [2024-06-07 23:28:54.121884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.622 [2024-06-07 23:28:54.130625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.622 [2024-06-07 23:28:54.130642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.622 [2024-06-07 23:28:54.130648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.622 [2024-06-07 23:28:54.142381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.623 [2024-06-07 23:28:54.142398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.623 [2024-06-07 23:28:54.142404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.623 [2024-06-07 23:28:54.152057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.623 [2024-06-07 23:28:54.152074] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.623 [2024-06-07 23:28:54.152080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.623 [2024-06-07 23:28:54.162831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.623 [2024-06-07 23:28:54.162849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.623 [2024-06-07 23:28:54.162854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.623 [2024-06-07 23:28:54.172562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.623 [2024-06-07 23:28:54.172578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.623 [2024-06-07 23:28:54.172584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.623 [2024-06-07 23:28:54.184213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.623 [2024-06-07 23:28:54.184230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.623 [2024-06-07 23:28:54.184237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.623 [2024-06-07 23:28:54.192735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.623 [2024-06-07 23:28:54.192752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.623 [2024-06-07 23:28:54.192758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.623 [2024-06-07 23:28:54.201163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.623 [2024-06-07 23:28:54.201181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.623 [2024-06-07 23:28:54.201187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.623 [2024-06-07 23:28:54.208940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.623 [2024-06-07 23:28:54.208957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.623 [2024-06-07 23:28:54.208963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.623 [2024-06-07 23:28:54.216009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 
00:32:31.623 [2024-06-07 23:28:54.216026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.623 [2024-06-07 23:28:54.216032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.623 [2024-06-07 23:28:54.225856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.623 [2024-06-07 23:28:54.225873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.623 [2024-06-07 23:28:54.225879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.623 [2024-06-07 23:28:54.234265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.623 [2024-06-07 23:28:54.234282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.623 [2024-06-07 23:28:54.234288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.623 [2024-06-07 23:28:54.243550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.623 [2024-06-07 23:28:54.243567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.623 [2024-06-07 23:28:54.243573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.623 [2024-06-07 23:28:54.254854] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.623 [2024-06-07 23:28:54.254872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.623 [2024-06-07 23:28:54.254881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.623 [2024-06-07 23:28:54.266874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.623 [2024-06-07 23:28:54.266892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.623 [2024-06-07 23:28:54.266898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.623 [2024-06-07 23:28:54.277080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.623 [2024-06-07 23:28:54.277096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.623 [2024-06-07 23:28:54.277102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.623 [2024-06-07 23:28:54.287578] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.623 [2024-06-07 23:28:54.287595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.623 [2024-06-07 23:28:54.287600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.623 [2024-06-07 23:28:54.298625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.623 [2024-06-07 23:28:54.298642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.623 [2024-06-07 23:28:54.298648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.886 [2024-06-07 23:28:54.309778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.886 [2024-06-07 23:28:54.309796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.886 [2024-06-07 23:28:54.309802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.886 [2024-06-07 23:28:54.319708] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.886 [2024-06-07 23:28:54.319726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.886 [2024-06-07 23:28:54.319732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.886 [2024-06-07 23:28:54.330542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.886 [2024-06-07 23:28:54.330559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.886 [2024-06-07 23:28:54.330565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.886 [2024-06-07 23:28:54.341083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.886 [2024-06-07 23:28:54.341100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.886 [2024-06-07 23:28:54.341106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.886 [2024-06-07 23:28:54.351294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.886 [2024-06-07 23:28:54.351313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.886 [2024-06-07 23:28:54.351319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:32:31.886 [2024-06-07 23:28:54.362374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.886 [2024-06-07 23:28:54.362391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.886 [2024-06-07 23:28:54.362397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.886 [2024-06-07 23:28:54.372432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.886 [2024-06-07 23:28:54.372449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.886 [2024-06-07 23:28:54.372455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.886 [2024-06-07 23:28:54.381974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.886 [2024-06-07 23:28:54.381991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.886 [2024-06-07 23:28:54.381997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.886 [2024-06-07 23:28:54.392674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.886 [2024-06-07 23:28:54.392692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.886 [2024-06-07 23:28:54.392698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.886 [2024-06-07 23:28:54.403161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.886 [2024-06-07 23:28:54.403178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.886 [2024-06-07 23:28:54.403184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.886 [2024-06-07 23:28:54.413856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.886 [2024-06-07 23:28:54.413873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.886 [2024-06-07 23:28:54.413879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.886 [2024-06-07 23:28:54.424924] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.886 [2024-06-07 23:28:54.424941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.886 [2024-06-07 23:28:54.424947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.886 [2024-06-07 23:28:54.435499] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.886 [2024-06-07 23:28:54.435516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.886 [2024-06-07 23:28:54.435522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.886 [2024-06-07 23:28:54.446149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.886 [2024-06-07 23:28:54.446166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.886 [2024-06-07 23:28:54.446172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.886 [2024-06-07 23:28:54.457891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.886 [2024-06-07 23:28:54.457909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.886 [2024-06-07 23:28:54.457915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.886 [2024-06-07 23:28:54.469196] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.886 [2024-06-07 23:28:54.469214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.886 [2024-06-07 23:28:54.469220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.886 [2024-06-07 23:28:54.480651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.886 [2024-06-07 23:28:54.480668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.886 [2024-06-07 23:28:54.480674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.886 [2024-06-07 23:28:54.491509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.886 [2024-06-07 23:28:54.491527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.886 [2024-06-07 23:28:54.491533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.886 [2024-06-07 23:28:54.502107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.886 [2024-06-07 23:28:54.502124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.886 [2024-06-07 23:28:54.502130] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.886 [2024-06-07 23:28:54.512042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.887 [2024-06-07 23:28:54.512059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.887 [2024-06-07 23:28:54.512066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.887 [2024-06-07 23:28:54.521713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.887 [2024-06-07 23:28:54.521731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.887 [2024-06-07 23:28:54.521738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.887 [2024-06-07 23:28:54.531126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.887 [2024-06-07 23:28:54.531143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.887 [2024-06-07 23:28:54.531154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.887 [2024-06-07 23:28:54.541277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.887 [2024-06-07 23:28:54.541295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.887 [2024-06-07 23:28:54.541300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.887 [2024-06-07 23:28:54.551188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.887 [2024-06-07 23:28:54.551206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.887 [2024-06-07 23:28:54.551212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.887 [2024-06-07 23:28:54.562132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:31.887 [2024-06-07 23:28:54.562149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.887 [2024-06-07 23:28:54.562155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.149 [2024-06-07 23:28:54.571616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.149 [2024-06-07 23:28:54.571633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.149 
[2024-06-07 23:28:54.571639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.149 [2024-06-07 23:28:54.581932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.149 [2024-06-07 23:28:54.581950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.149 [2024-06-07 23:28:54.581957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.149 [2024-06-07 23:28:54.591661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.149 [2024-06-07 23:28:54.591678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.149 [2024-06-07 23:28:54.591684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.149 [2024-06-07 23:28:54.602640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.149 [2024-06-07 23:28:54.602657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.149 [2024-06-07 23:28:54.602663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.149 [2024-06-07 23:28:54.614167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.149 [2024-06-07 23:28:54.614183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.149 [2024-06-07 23:28:54.614189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.149 [2024-06-07 23:28:54.624449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.149 [2024-06-07 23:28:54.624469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.149 [2024-06-07 23:28:54.624475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.149 [2024-06-07 23:28:54.635464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.149 [2024-06-07 23:28:54.635481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.149 [2024-06-07 23:28:54.635487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.149 [2024-06-07 23:28:54.645097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.149 [2024-06-07 23:28:54.645115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7936 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.149 [2024-06-07 23:28:54.645121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.149 [2024-06-07 23:28:54.657133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.149 [2024-06-07 23:28:54.657151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.149 [2024-06-07 23:28:54.657156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.149 [2024-06-07 23:28:54.668337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.149 [2024-06-07 23:28:54.668354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.149 [2024-06-07 23:28:54.668360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.149 [2024-06-07 23:28:54.678240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.149 [2024-06-07 23:28:54.678262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.149 [2024-06-07 23:28:54.678268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.149 [2024-06-07 23:28:54.688171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.149 [2024-06-07 23:28:54.688188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.150 [2024-06-07 23:28:54.688195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.150 [2024-06-07 23:28:54.699462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.150 [2024-06-07 23:28:54.699479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.150 [2024-06-07 23:28:54.699486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.150 [2024-06-07 23:28:54.709602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.150 [2024-06-07 23:28:54.709620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.150 [2024-06-07 23:28:54.709631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.150 [2024-06-07 23:28:54.719861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.150 [2024-06-07 23:28:54.719878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:9 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.150 [2024-06-07 23:28:54.719883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.150 [2024-06-07 23:28:54.730037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.150 [2024-06-07 23:28:54.730054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.150 [2024-06-07 23:28:54.730060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.150 [2024-06-07 23:28:54.740282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.150 [2024-06-07 23:28:54.740300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.150 [2024-06-07 23:28:54.740305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.150 [2024-06-07 23:28:54.750983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.150 [2024-06-07 23:28:54.750999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.150 [2024-06-07 23:28:54.751005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.150 [2024-06-07 23:28:54.763293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.150 [2024-06-07 23:28:54.763310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.150 [2024-06-07 23:28:54.763316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.150 [2024-06-07 23:28:54.775122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.150 [2024-06-07 23:28:54.775139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.150 [2024-06-07 23:28:54.775145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.150 [2024-06-07 23:28:54.781804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.150 [2024-06-07 23:28:54.781822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.150 [2024-06-07 23:28:54.781828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.150 [2024-06-07 23:28:54.791578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.150 [2024-06-07 23:28:54.791594] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.150 [2024-06-07 23:28:54.791600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.150 [2024-06-07 23:28:54.802204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.150 [2024-06-07 23:28:54.802225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.150 [2024-06-07 23:28:54.802231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.150 [2024-06-07 23:28:54.813619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.150 [2024-06-07 23:28:54.813636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.150 [2024-06-07 23:28:54.813643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.150 [2024-06-07 23:28:54.823935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.150 [2024-06-07 23:28:54.823953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.150 [2024-06-07 23:28:54.823959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.412 [2024-06-07 23:28:54.833592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.412 [2024-06-07 23:28:54.833610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.412 [2024-06-07 23:28:54.833616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.412 [2024-06-07 23:28:54.843534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.412 [2024-06-07 23:28:54.843551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.412 [2024-06-07 23:28:54.843557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.412 [2024-06-07 23:28:54.854550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.412 [2024-06-07 23:28:54.854568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.412 [2024-06-07 23:28:54.854574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.412 [2024-06-07 23:28:54.864940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.412 
[2024-06-07 23:28:54.864956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.412 [2024-06-07 23:28:54.864962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.412 [2024-06-07 23:28:54.875899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.412 [2024-06-07 23:28:54.875917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.412 [2024-06-07 23:28:54.875923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.412 [2024-06-07 23:28:54.887063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.412 [2024-06-07 23:28:54.887081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.412 [2024-06-07 23:28:54.887086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.412 [2024-06-07 23:28:54.897283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.412 [2024-06-07 23:28:54.897299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.413 [2024-06-07 23:28:54.897305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.413 [2024-06-07 23:28:54.906853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.413 [2024-06-07 23:28:54.906870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.413 [2024-06-07 23:28:54.906876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.413 [2024-06-07 23:28:54.916974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.413 [2024-06-07 23:28:54.916992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.413 [2024-06-07 23:28:54.916998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.413 [2024-06-07 23:28:54.926813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.413 [2024-06-07 23:28:54.926830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.413 [2024-06-07 23:28:54.926836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.413 [2024-06-07 23:28:54.937286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.413 [2024-06-07 23:28:54.937303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.413 [2024-06-07 23:28:54.937309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.413 [2024-06-07 23:28:54.947588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.413 [2024-06-07 23:28:54.947606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.413 [2024-06-07 23:28:54.947612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.413 [2024-06-07 23:28:54.957127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.413 [2024-06-07 23:28:54.957144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.413 [2024-06-07 23:28:54.957150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.413 [2024-06-07 23:28:54.966352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.413 [2024-06-07 23:28:54.966369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.413 [2024-06-07 23:28:54.966375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.413 [2024-06-07 23:28:54.976487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.413 [2024-06-07 23:28:54.976504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.413 [2024-06-07 23:28:54.976514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.413 [2024-06-07 23:28:54.986711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.413 [2024-06-07 23:28:54.986729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.413 [2024-06-07 23:28:54.986735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.413 [2024-06-07 23:28:54.997927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.413 [2024-06-07 23:28:54.997945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.413 [2024-06-07 23:28:54.997950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.413 [2024-06-07 23:28:55.008932] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.413 [2024-06-07 23:28:55.008950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.413 [2024-06-07 23:28:55.008956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.413 [2024-06-07 23:28:55.019729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.413 [2024-06-07 23:28:55.019746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.413 [2024-06-07 23:28:55.019753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.413 [2024-06-07 23:28:55.029566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.413 [2024-06-07 23:28:55.029583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.413 [2024-06-07 23:28:55.029589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.413 [2024-06-07 23:28:55.040688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.413 [2024-06-07 23:28:55.040705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.413 [2024-06-07 23:28:55.040711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.413 [2024-06-07 23:28:55.052066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.413 [2024-06-07 23:28:55.052083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.413 [2024-06-07 23:28:55.052090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.413 [2024-06-07 23:28:55.060303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x211fd00) 00:32:32.413 [2024-06-07 23:28:55.060321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.413 [2024-06-07 23:28:55.060327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.413 00:32:32.413 Latency(us) 00:32:32.413 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:32.413 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:32.413 nvme0n1 : 2.00 2981.75 372.72 0.00 0.00 5362.92 870.40 15510.19 00:32:32.413 =================================================================================================================== 00:32:32.413 Total : 2981.75 372.72 0.00 0.00 5362.92 870.40 15510.19 00:32:32.413 0 
00:32:32.413 23:28:55 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:32.413 23:28:55 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:32.413 | .driver_specific 00:32:32.413 | .nvme_error 00:32:32.413 | .status_code 00:32:32.413 | .command_transient_transport_error' 00:32:32.413 23:28:55 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:32.413 23:28:55 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:32.674 23:28:55 -- host/digest.sh@71 -- # (( 192 > 0 )) 00:32:32.674 23:28:55 -- host/digest.sh@73 -- # killprocess 3043114 00:32:32.674 23:28:55 -- common/autotest_common.sh@926 -- # '[' -z 3043114 ']' 00:32:32.674 23:28:55 -- common/autotest_common.sh@930 -- # kill -0 3043114 00:32:32.674 23:28:55 -- common/autotest_common.sh@931 -- # uname 00:32:32.674 23:28:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:32.674 23:28:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3043114 00:32:32.674 23:28:55 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:32.674 23:28:55 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:32.674 23:28:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3043114' 00:32:32.674 killing process with pid 3043114 00:32:32.674 23:28:55 -- common/autotest_common.sh@945 -- # kill 3043114 00:32:32.674 Received shutdown signal, test time was about 2.000000 seconds 00:32:32.674 00:32:32.674 Latency(us) 00:32:32.674 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:32.674 =================================================================================================================== 00:32:32.674 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:32.674 23:28:55 -- common/autotest_common.sh@950 -- # wait 3043114 00:32:32.935 23:28:55 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:32:32.935 23:28:55 -- host/digest.sh@54 -- # local rw bs qd 00:32:32.935 23:28:55 -- host/digest.sh@56 -- # rw=randwrite 00:32:32.935 23:28:55 -- host/digest.sh@56 -- # bs=4096 00:32:32.935 23:28:55 -- host/digest.sh@56 -- # qd=128 00:32:32.935 23:28:55 -- host/digest.sh@58 -- # bperfpid=3043821 00:32:32.935 23:28:55 -- host/digest.sh@60 -- # waitforlisten 3043821 /var/tmp/bperf.sock 00:32:32.935 23:28:55 -- common/autotest_common.sh@819 -- # '[' -z 3043821 ']' 00:32:32.935 23:28:55 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:32:32.935 23:28:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:32.935 23:28:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:32.935 23:28:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:32.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:32.935 23:28:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:32.935 23:28:55 -- common/autotest_common.sh@10 -- # set +x 00:32:32.935 [2024-06-07 23:28:55.452860] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:32:32.935 [2024-06-07 23:28:55.452949] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3043821 ] 00:32:32.935 EAL: No free 2048 kB hugepages reported on node 1 00:32:32.935 [2024-06-07 23:28:55.530894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:32.935 [2024-06-07 23:28:55.556920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:33.876 23:28:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:33.876 23:28:56 -- common/autotest_common.sh@852 -- # return 0 00:32:33.876 23:28:56 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:33.876 23:28:56 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:33.876 23:28:56 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:33.876 23:28:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:33.876 23:28:56 -- common/autotest_common.sh@10 -- # set +x 00:32:33.876 23:28:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:33.876 23:28:56 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:33.876 23:28:56 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:34.137 nvme0n1 00:32:34.137 23:28:56 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:32:34.137 23:28:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:34.137 23:28:56 -- common/autotest_common.sh@10 -- # set +x 00:32:34.137 23:28:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:34.137 23:28:56 -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:34.137 23:28:56 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:34.137 Running I/O for 2 seconds... 
00:32:34.137 [2024-06-07 23:28:56.733505] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fa7d8 00:32:34.137 [2024-06-07 23:28:56.733952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.137 [2024-06-07 23:28:56.733977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:34.137 [2024-06-07 23:28:56.745934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f1430 00:32:34.137 [2024-06-07 23:28:56.746000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:25133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.137 [2024-06-07 23:28:56.746017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:34.137 [2024-06-07 23:28:56.759682] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fac10 00:32:34.137 [2024-06-07 23:28:56.761346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.137 [2024-06-07 23:28:56.761380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.137 [2024-06-07 23:28:56.771086] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f0ff8 00:32:34.137 [2024-06-07 23:28:56.772606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.137 [2024-06-07 23:28:56.772623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:34.137 [2024-06-07 23:28:56.780708] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f1868 00:32:34.137 [2024-06-07 23:28:56.780843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.137 [2024-06-07 23:28:56.780858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:34.137 [2024-06-07 23:28:56.792113] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f6020 00:32:34.137 [2024-06-07 23:28:56.792231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.137 [2024-06-07 23:28:56.792250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:34.138 [2024-06-07 23:28:56.803482] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190feb58 00:32:34.138 [2024-06-07 23:28:56.803571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:18104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.138 [2024-06-07 23:28:56.803586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 
sqhd:0054 p:0 m:0 dnr:0 00:32:34.138 [2024-06-07 23:28:56.816659] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f1ca0 00:32:34.138 [2024-06-07 23:28:56.817550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.138 [2024-06-07 23:28:56.817567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:34.399 [2024-06-07 23:28:56.828388] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190de8a8 00:32:34.399 [2024-06-07 23:28:56.829931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.399 [2024-06-07 23:28:56.829947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:34.399 [2024-06-07 23:28:56.839411] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190de038 00:32:34.399 [2024-06-07 23:28:56.840254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.399 [2024-06-07 23:28:56.840270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:34.399 [2024-06-07 23:28:56.851454] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190ecc78 00:32:34.399 [2024-06-07 23:28:56.853165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.399 [2024-06-07 23:28:56.853181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:34.399 [2024-06-07 23:28:56.860865] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e99d8 00:32:34.399 [2024-06-07 23:28:56.861027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.399 [2024-06-07 23:28:56.861041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:34.399 [2024-06-07 23:28:56.874070] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e99d8 00:32:34.399 [2024-06-07 23:28:56.874877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.399 [2024-06-07 23:28:56.874893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:34.399 [2024-06-07 23:28:56.885870] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190eb760 00:32:34.399 [2024-06-07 23:28:56.886669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.399 [2024-06-07 23:28:56.886685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:79 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:34.399 [2024-06-07 23:28:56.897306] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e8d30 00:32:34.399 [2024-06-07 23:28:56.898102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:14649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.399 [2024-06-07 23:28:56.898118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:34.399 [2024-06-07 23:28:56.908731] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190ec408 00:32:34.399 [2024-06-07 23:28:56.909533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:3113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.399 [2024-06-07 23:28:56.909549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:34.399 [2024-06-07 23:28:56.920145] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e8088 00:32:34.399 [2024-06-07 23:28:56.920950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.399 [2024-06-07 23:28:56.920966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:34.399 [2024-06-07 23:28:56.931580] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190de038 00:32:34.399 [2024-06-07 23:28:56.932381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:8442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.399 [2024-06-07 23:28:56.932397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:34.399 [2024-06-07 23:28:56.943013] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e73e0 00:32:34.399 [2024-06-07 23:28:56.943816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.399 [2024-06-07 23:28:56.943832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:34.399 [2024-06-07 23:28:56.954439] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190dece0 00:32:34.399 [2024-06-07 23:28:56.955234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.399 [2024-06-07 23:28:56.955253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:34.399 [2024-06-07 23:28:56.966034] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190ec408 00:32:34.400 [2024-06-07 23:28:56.967152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.400 [2024-06-07 23:28:56.967169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:34.400 [2024-06-07 23:28:56.976142] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e0a68 00:32:34.400 [2024-06-07 23:28:56.976889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.400 [2024-06-07 23:28:56.976905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:34.400 [2024-06-07 23:28:56.987401] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190eff18 00:32:34.400 [2024-06-07 23:28:56.988504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.400 [2024-06-07 23:28:56.988524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:34.400 [2024-06-07 23:28:56.999055] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190ef270 00:32:34.400 [2024-06-07 23:28:56.999339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:15650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.400 [2024-06-07 23:28:56.999355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:34.400 [2024-06-07 23:28:57.010558] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190ebfd0 00:32:34.400 [2024-06-07 23:28:57.010810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.400 [2024-06-07 23:28:57.010825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:34.400 [2024-06-07 23:28:57.021914] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f8e88 00:32:34.400 [2024-06-07 23:28:57.022147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.400 [2024-06-07 23:28:57.022162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:34.400 [2024-06-07 23:28:57.035181] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e4de8 00:32:34.400 [2024-06-07 23:28:57.036532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.400 [2024-06-07 23:28:57.036548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:34.400 [2024-06-07 23:28:57.045559] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190efae0 00:32:34.400 [2024-06-07 23:28:57.046443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.400 [2024-06-07 23:28:57.046459] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:34.400 [2024-06-07 23:28:57.056945] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f96f8 00:32:34.400 [2024-06-07 23:28:57.058107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.400 [2024-06-07 23:28:57.058122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:34.400 [2024-06-07 23:28:57.068312] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fe720 00:32:34.400 [2024-06-07 23:28:57.069476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.400 [2024-06-07 23:28:57.069505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:34.400 [2024-06-07 23:28:57.079718] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f9b30 00:32:34.661 [2024-06-07 23:28:57.080833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.661 [2024-06-07 23:28:57.080848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:34.661 [2024-06-07 23:28:57.091103] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fb048 00:32:34.661 [2024-06-07 23:28:57.092273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.661 [2024-06-07 23:28:57.092288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:34.661 [2024-06-07 23:28:57.102469] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e3498 00:32:34.661 [2024-06-07 23:28:57.103570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.661 [2024-06-07 23:28:57.103599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:34.661 [2024-06-07 23:28:57.113847] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e9e10 00:32:34.661 [2024-06-07 23:28:57.114936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.661 [2024-06-07 23:28:57.114977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:34.661 [2024-06-07 23:28:57.125210] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190dfdc0 00:32:34.661 [2024-06-07 23:28:57.126345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.661 [2024-06-07 23:28:57.126360] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:34.661 [2024-06-07 23:28:57.136578] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f20d8 00:32:34.661 [2024-06-07 23:28:57.137673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.661 [2024-06-07 23:28:57.137688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:34.661 [2024-06-07 23:28:57.147931] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e7818 00:32:34.661 [2024-06-07 23:28:57.149055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.661 [2024-06-07 23:28:57.149071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:34.661 [2024-06-07 23:28:57.159299] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e0ea0 00:32:34.661 [2024-06-07 23:28:57.160391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.662 [2024-06-07 23:28:57.160407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:34.662 [2024-06-07 23:28:57.170011] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fbcf0 00:32:34.662 [2024-06-07 23:28:57.170279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.662 [2024-06-07 23:28:57.170300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:34.662 [2024-06-07 23:28:57.182295] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f0ff8 00:32:34.662 [2024-06-07 23:28:57.183366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.662 [2024-06-07 23:28:57.183382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:34.662 [2024-06-07 23:28:57.193474] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f6890 00:32:34.662 [2024-06-07 23:28:57.193675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:11185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.662 [2024-06-07 23:28:57.193689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:34.662 [2024-06-07 23:28:57.204937] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f0ff8 00:32:34.662 [2024-06-07 23:28:57.205119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.662 [2024-06-07 
23:28:57.205133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:34.662 [2024-06-07 23:28:57.216157] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f57b0 00:32:34.662 [2024-06-07 23:28:57.216327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.662 [2024-06-07 23:28:57.216342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:34.662 [2024-06-07 23:28:57.227838] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190ed4e8 00:32:34.662 [2024-06-07 23:28:57.227979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.662 [2024-06-07 23:28:57.227993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:34.662 [2024-06-07 23:28:57.239171] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f46d0 00:32:34.662 [2024-06-07 23:28:57.239308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.662 [2024-06-07 23:28:57.239324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:34.662 [2024-06-07 23:28:57.250398] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190ee5c8 00:32:34.662 [2024-06-07 23:28:57.250500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.662 [2024-06-07 23:28:57.250516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:34.662 [2024-06-07 23:28:57.261978] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fdeb0 00:32:34.662 [2024-06-07 23:28:57.262070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.662 [2024-06-07 23:28:57.262086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:34.662 [2024-06-07 23:28:57.273389] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e1b48 00:32:34.662 [2024-06-07 23:28:57.273452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.662 [2024-06-07 23:28:57.273467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:34.662 [2024-06-07 23:28:57.286953] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f57b0 00:32:34.662 [2024-06-07 23:28:57.288665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20447 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:32:34.662 [2024-06-07 23:28:57.288684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.662 [2024-06-07 23:28:57.298342] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fe720 00:32:34.662 [2024-06-07 23:28:57.300021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.662 [2024-06-07 23:28:57.300037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:34.662 [2024-06-07 23:28:57.309704] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e6300 00:32:34.662 [2024-06-07 23:28:57.311401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.662 [2024-06-07 23:28:57.311417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:34.662 [2024-06-07 23:28:57.320652] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f2948 00:32:34.662 [2024-06-07 23:28:57.321475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.662 [2024-06-07 23:28:57.321489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:34.662 [2024-06-07 23:28:57.332438] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190eff18 00:32:34.662 [2024-06-07 23:28:57.333257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.662 [2024-06-07 23:28:57.333273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:34.924 [2024-06-07 23:28:57.343860] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fcdd0 00:32:34.924 [2024-06-07 23:28:57.344683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.924 [2024-06-07 23:28:57.344698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:34.924 [2024-06-07 23:28:57.355312] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f1ca0 00:32:34.924 [2024-06-07 23:28:57.356127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.924 [2024-06-07 23:28:57.356142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:34.924 [2024-06-07 23:28:57.366746] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f20d8 00:32:34.924 [2024-06-07 23:28:57.367567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12232 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.924 [2024-06-07 23:28:57.367582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:34.924 [2024-06-07 23:28:57.378189] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f57b0 00:32:34.924 [2024-06-07 23:28:57.379010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.924 [2024-06-07 23:28:57.379025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:34.924 [2024-06-07 23:28:57.389592] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e3060 00:32:34.924 [2024-06-07 23:28:57.390417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.924 [2024-06-07 23:28:57.390433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:34.924 [2024-06-07 23:28:57.400993] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190feb58 00:32:34.924 [2024-06-07 23:28:57.401815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.924 [2024-06-07 23:28:57.401831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:34.924 [2024-06-07 23:28:57.412413] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190ebfd0 00:32:34.924 [2024-06-07 23:28:57.413230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.924 [2024-06-07 23:28:57.413249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:34.924 [2024-06-07 23:28:57.423008] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f7100 00:32:34.924 [2024-06-07 23:28:57.423307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.924 [2024-06-07 23:28:57.423323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:34.924 [2024-06-07 23:28:57.434347] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190eea00 00:32:34.924 [2024-06-07 23:28:57.434621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.924 [2024-06-07 23:28:57.434637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:34.924 [2024-06-07 23:28:57.447427] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e0a68 00:32:34.924 [2024-06-07 23:28:57.448491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 
nsid:1 lba:17830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.924 [2024-06-07 23:28:57.448507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:34.924 [2024-06-07 23:28:57.456618] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fc998 00:32:34.924 [2024-06-07 23:28:57.457567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.924 [2024-06-07 23:28:57.457582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:34.924 [2024-06-07 23:28:57.467999] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f5378 00:32:34.924 [2024-06-07 23:28:57.468824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.924 [2024-06-07 23:28:57.468840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:34.924 [2024-06-07 23:28:57.482113] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fcdd0 00:32:34.924 [2024-06-07 23:28:57.483194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.924 [2024-06-07 23:28:57.483210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:34.924 [2024-06-07 23:28:57.494139] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f7da8 00:32:34.924 [2024-06-07 23:28:57.495514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.924 [2024-06-07 23:28:57.495529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:34.924 [2024-06-07 23:28:57.503939] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190eee38 00:32:34.924 [2024-06-07 23:28:57.504504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.924 [2024-06-07 23:28:57.504520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:34.924 [2024-06-07 23:28:57.515473] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f4298 00:32:34.924 [2024-06-07 23:28:57.516010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.924 [2024-06-07 23:28:57.516025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:34.924 [2024-06-07 23:28:57.526801] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e0630 00:32:34.924 [2024-06-07 23:28:57.527323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:6 nsid:1 lba:21757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.924 [2024-06-07 23:28:57.527338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:34.924 [2024-06-07 23:28:57.538466] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f2d80 00:32:34.924 [2024-06-07 23:28:57.539133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.924 [2024-06-07 23:28:57.539149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:34.924 [2024-06-07 23:28:57.549884] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190ebfd0 00:32:34.924 [2024-06-07 23:28:57.550553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.924 [2024-06-07 23:28:57.550568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:34.924 [2024-06-07 23:28:57.561177] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e6b70 00:32:34.924 [2024-06-07 23:28:57.562938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.924 [2024-06-07 23:28:57.562968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:34.925 [2024-06-07 23:28:57.572645] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e8088 00:32:34.925 [2024-06-07 23:28:57.574154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:25572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.925 [2024-06-07 23:28:57.574169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:34.925 [2024-06-07 23:28:57.583294] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e5a90 00:32:34.925 [2024-06-07 23:28:57.583793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.925 [2024-06-07 23:28:57.583811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:34.925 [2024-06-07 23:28:57.594225] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190eb328 00:32:34.925 [2024-06-07 23:28:57.594616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:34.925 [2024-06-07 23:28:57.594632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:35.187 [2024-06-07 23:28:57.605478] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190de038 00:32:35.187 [2024-06-07 23:28:57.605927] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.187 [2024-06-07 23:28:57.605943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:35.187 [2024-06-07 23:28:57.616882] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f0bc0 00:32:35.187 [2024-06-07 23:28:57.617156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.187 [2024-06-07 23:28:57.617171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:35.187 [2024-06-07 23:28:57.628236] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fda78 00:32:35.187 [2024-06-07 23:28:57.628493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.187 [2024-06-07 23:28:57.628507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:35.187 [2024-06-07 23:28:57.640262] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190de470 00:32:35.187 [2024-06-07 23:28:57.641000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.187 [2024-06-07 23:28:57.641016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:35.187 [2024-06-07 23:28:57.651642] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f8e88 00:32:35.187 [2024-06-07 23:28:57.651762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.187 [2024-06-07 23:28:57.651777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:35.187 [2024-06-07 23:28:57.662993] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fd640 00:32:35.187 [2024-06-07 23:28:57.663092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.187 [2024-06-07 23:28:57.663107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:35.187 [2024-06-07 23:28:57.674411] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f5be8 00:32:35.187 [2024-06-07 23:28:57.674490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.187 [2024-06-07 23:28:57.674504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:35.187 [2024-06-07 23:28:57.686093] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190ec408 00:32:35.187 [2024-06-07 
23:28:57.686182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.187 [2024-06-07 23:28:57.686200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:35.187 [2024-06-07 23:28:57.697481] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f7da8 00:32:35.187 [2024-06-07 23:28:57.697549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.187 [2024-06-07 23:28:57.697564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:35.187 [2024-06-07 23:28:57.711006] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fc998 00:32:35.187 [2024-06-07 23:28:57.712324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.187 [2024-06-07 23:28:57.712340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.187 [2024-06-07 23:28:57.722501] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f6020 00:32:35.187 [2024-06-07 23:28:57.724191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.187 [2024-06-07 23:28:57.724206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.187 [2024-06-07 23:28:57.733857] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e3060 00:32:35.187 [2024-06-07 23:28:57.735562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.187 [2024-06-07 23:28:57.735578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:35.187 [2024-06-07 23:28:57.744493] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190feb58 00:32:35.187 [2024-06-07 23:28:57.745178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.187 [2024-06-07 23:28:57.745193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:35.187 [2024-06-07 23:28:57.754478] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f96f8 00:32:35.187 [2024-06-07 23:28:57.755651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.187 [2024-06-07 23:28:57.755667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:35.187 [2024-06-07 23:28:57.765989] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e38d0 
00:32:35.187 [2024-06-07 23:28:57.767003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.187 [2024-06-07 23:28:57.767018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:35.187 [2024-06-07 23:28:57.777539] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f4f40 00:32:35.187 [2024-06-07 23:28:57.778126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.187 [2024-06-07 23:28:57.778141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:35.187 [2024-06-07 23:28:57.791181] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fcdd0 00:32:35.187 [2024-06-07 23:28:57.792924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.187 [2024-06-07 23:28:57.792940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:35.187 [2024-06-07 23:28:57.800750] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f9b30 00:32:35.187 [2024-06-07 23:28:57.800818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.187 [2024-06-07 23:28:57.800833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:35.187 [2024-06-07 23:28:57.814466] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fc998 00:32:35.187 [2024-06-07 23:28:57.816070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.187 [2024-06-07 23:28:57.816086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:35.187 [2024-06-07 23:28:57.825982] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f57b0 00:32:35.187 [2024-06-07 23:28:57.827174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.187 [2024-06-07 23:28:57.827189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.187 [2024-06-07 23:28:57.837478] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e3498 00:32:35.187 [2024-06-07 23:28:57.838033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.187 [2024-06-07 23:28:57.838049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:35.187 [2024-06-07 23:28:57.848845] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) 
with pdu=0x2000190f2510 00:32:35.187 [2024-06-07 23:28:57.849386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.187 [2024-06-07 23:28:57.849401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:35.187 [2024-06-07 23:28:57.860236] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f46d0 00:32:35.187 [2024-06-07 23:28:57.860759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.187 [2024-06-07 23:28:57.860775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:35.449 [2024-06-07 23:28:57.871951] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f35f0 00:32:35.449 [2024-06-07 23:28:57.872478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.449 [2024-06-07 23:28:57.872493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:35.449 [2024-06-07 23:28:57.883336] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190ee5c8 00:32:35.449 [2024-06-07 23:28:57.883841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.449 [2024-06-07 23:28:57.883857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:35.449 [2024-06-07 23:28:57.894709] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e38d0 00:32:35.449 [2024-06-07 23:28:57.895193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.449 [2024-06-07 23:28:57.895209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:35.449 [2024-06-07 23:28:57.905917] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e0ea0 00:32:35.449 [2024-06-07 23:28:57.906385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.449 [2024-06-07 23:28:57.906401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:35.449 [2024-06-07 23:28:57.917288] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e0a68 00:32:35.449 [2024-06-07 23:28:57.917734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.449 [2024-06-07 23:28:57.917749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:35.449 [2024-06-07 23:28:57.928848] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x10b04f0) with pdu=0x2000190de038 00:32:35.449 [2024-06-07 23:28:57.929284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.450 [2024-06-07 23:28:57.929299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:35.450 [2024-06-07 23:28:57.940038] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fc560 00:32:35.450 [2024-06-07 23:28:57.940462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.450 [2024-06-07 23:28:57.940478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:35.450 [2024-06-07 23:28:57.951598] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e99d8 00:32:35.450 [2024-06-07 23:28:57.951985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.450 [2024-06-07 23:28:57.952000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:35.450 [2024-06-07 23:28:57.963383] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e1710 00:32:35.450 [2024-06-07 23:28:57.963827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:7012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.450 [2024-06-07 23:28:57.963842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:35.450 [2024-06-07 23:28:57.974767] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f4f40 00:32:35.450 [2024-06-07 23:28:57.975186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.450 [2024-06-07 23:28:57.975201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:35.450 [2024-06-07 23:28:57.986157] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e4de8 00:32:35.450 [2024-06-07 23:28:57.986562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.450 [2024-06-07 23:28:57.986581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:35.450 [2024-06-07 23:28:57.996707] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e49b0 00:32:35.450 [2024-06-07 23:28:57.997791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:21680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.450 [2024-06-07 23:28:57.997807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:35.450 [2024-06-07 23:28:58.009530] tcp.c:2034:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:35.450 [2024-06-07 23:28:58.010244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.450 [2024-06-07 23:28:58.010261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:35.450 [2024-06-07 23:28:58.020934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e12d8 00:32:35.450 [2024-06-07 23:28:58.021651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.450 [2024-06-07 23:28:58.021666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:35.450 [2024-06-07 23:28:58.032348] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e1710 00:32:35.450 [2024-06-07 23:28:58.033063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.450 [2024-06-07 23:28:58.033079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:35.450 [2024-06-07 23:28:58.043783] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f8e88 00:32:35.450 [2024-06-07 23:28:58.044503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.450 [2024-06-07 23:28:58.044518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:35.450 [2024-06-07 23:28:58.055220] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190ef270 00:32:35.450 [2024-06-07 23:28:58.056129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.450 [2024-06-07 23:28:58.056144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:35.450 [2024-06-07 23:28:58.066868] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f1868 00:32:35.450 [2024-06-07 23:28:58.067592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.450 [2024-06-07 23:28:58.067609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:35.450 [2024-06-07 23:28:58.078309] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e01f8 00:32:35.450 [2024-06-07 23:28:58.079025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.450 [2024-06-07 23:28:58.079040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:35.450 [2024-06-07 23:28:58.089717] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f96f8 00:32:35.450 [2024-06-07 23:28:58.090440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.450 [2024-06-07 23:28:58.090455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:35.450 [2024-06-07 23:28:58.101131] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fa3a0 00:32:35.450 [2024-06-07 23:28:58.101854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.450 [2024-06-07 23:28:58.101870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:35.450 [2024-06-07 23:28:58.112581] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f6020 00:32:35.450 [2024-06-07 23:28:58.113298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.450 [2024-06-07 23:28:58.113313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:35.450 [2024-06-07 23:28:58.124045] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e0a68 00:32:35.450 [2024-06-07 23:28:58.124765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.450 [2024-06-07 23:28:58.124780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:35.712 [2024-06-07 23:28:58.135524] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e6b70 00:32:35.712 [2024-06-07 23:28:58.136239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.712 [2024-06-07 23:28:58.136258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:35.712 [2024-06-07 23:28:58.146948] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190ed920 00:32:35.712 [2024-06-07 23:28:58.147671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.712 [2024-06-07 23:28:58.147686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:35.712 [2024-06-07 23:28:58.158390] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f5378 00:32:35.712 [2024-06-07 23:28:58.159107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.712 [2024-06-07 23:28:58.159122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:35.712 
[2024-06-07 23:28:58.169901] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f6cc8 00:32:35.712 [2024-06-07 23:28:58.170621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.712 [2024-06-07 23:28:58.170637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:35.712 [2024-06-07 23:28:58.181335] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f9b30 00:32:35.712 [2024-06-07 23:28:58.182050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.712 [2024-06-07 23:28:58.182065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.712 [2024-06-07 23:28:58.192774] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fac10 00:32:35.712 [2024-06-07 23:28:58.193493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.712 [2024-06-07 23:28:58.193509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:35.712 [2024-06-07 23:28:58.204210] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e3d08 00:32:35.712 [2024-06-07 23:28:58.204929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.712 [2024-06-07 23:28:58.204944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:35.712 [2024-06-07 23:28:58.216434] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190feb58 00:32:35.712 [2024-06-07 23:28:58.217430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.712 [2024-06-07 23:28:58.217445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:35.712 [2024-06-07 23:28:58.228240] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fa3a0 00:32:35.712 [2024-06-07 23:28:58.229228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.712 [2024-06-07 23:28:58.229247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:35.712 [2024-06-07 23:28:58.240062] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fcdd0 00:32:35.712 [2024-06-07 23:28:58.241045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.712 [2024-06-07 23:28:58.241060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 
dnr:0 00:32:35.712 [2024-06-07 23:28:58.251813] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f1ca0 00:32:35.712 [2024-06-07 23:28:58.252784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.712 [2024-06-07 23:28:58.252799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:35.712 [2024-06-07 23:28:58.263595] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e12d8 00:32:35.712 [2024-06-07 23:28:58.264561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:13673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.712 [2024-06-07 23:28:58.264577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:35.712 [2024-06-07 23:28:58.272893] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f6020 00:32:35.712 [2024-06-07 23:28:58.273054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:13834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.712 [2024-06-07 23:28:58.273068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:35.712 [2024-06-07 23:28:58.286165] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e88f8 00:32:35.712 [2024-06-07 23:28:58.287102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.712 [2024-06-07 23:28:58.287123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:35.712 [2024-06-07 23:28:58.297950] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190feb58 00:32:35.712 [2024-06-07 23:28:58.298887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.712 [2024-06-07 23:28:58.298903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:35.712 [2024-06-07 23:28:58.309286] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e8d30 00:32:35.712 [2024-06-07 23:28:58.310878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.712 [2024-06-07 23:28:58.310894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:35.712 [2024-06-07 23:28:58.320336] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f4b08 00:32:35.712 [2024-06-07 23:28:58.321228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.712 [2024-06-07 23:28:58.321246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 
cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:35.712 [2024-06-07 23:28:58.330691] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e0a68 00:32:35.712 [2024-06-07 23:28:58.331114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.712 [2024-06-07 23:28:58.331130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:35.712 [2024-06-07 23:28:58.342027] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e7c50 00:32:35.712 [2024-06-07 23:28:58.343030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.712 [2024-06-07 23:28:58.343046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:35.712 [2024-06-07 23:28:58.353410] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fd640 00:32:35.712 [2024-06-07 23:28:58.354416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.712 [2024-06-07 23:28:58.354432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:35.712 [2024-06-07 23:28:58.364757] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fdeb0 00:32:35.712 [2024-06-07 23:28:58.365757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.712 [2024-06-07 23:28:58.365787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:35.712 [2024-06-07 23:28:58.376155] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190ecc78 00:32:35.712 [2024-06-07 23:28:58.377134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.712 [2024-06-07 23:28:58.377149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:35.712 [2024-06-07 23:28:58.387507] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f3a28 00:32:35.712 [2024-06-07 23:28:58.388480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.712 [2024-06-07 23:28:58.388495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:35.974 [2024-06-07 23:28:58.398863] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fc128 00:32:35.974 [2024-06-07 23:28:58.399831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.974 [2024-06-07 23:28:58.399846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:35.974 [2024-06-07 23:28:58.410240] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f7100 00:32:35.974 [2024-06-07 23:28:58.411190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.974 [2024-06-07 23:28:58.411247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:35.974 [2024-06-07 23:28:58.421623] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f0350 00:32:35.974 [2024-06-07 23:28:58.422565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.974 [2024-06-07 23:28:58.422581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:35.974 [2024-06-07 23:28:58.433008] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e23b8 00:32:35.974 [2024-06-07 23:28:58.433991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.974 [2024-06-07 23:28:58.434007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:35.974 [2024-06-07 23:28:58.444381] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f31b8 00:32:35.974 [2024-06-07 23:28:58.445338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.974 [2024-06-07 23:28:58.445353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:35.974 [2024-06-07 23:28:58.455295] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f3a28 00:32:35.974 [2024-06-07 23:28:58.455765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.974 [2024-06-07 23:28:58.455780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:35.974 [2024-06-07 23:28:58.468947] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190eaab8 00:32:35.974 [2024-06-07 23:28:58.470097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.974 [2024-06-07 23:28:58.470113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:35.974 [2024-06-07 23:28:58.479568] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f4298 00:32:35.974 [2024-06-07 23:28:58.480334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.974 [2024-06-07 23:28:58.480350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:35.974 [2024-06-07 23:28:58.490751] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e1b48 00:32:35.974 [2024-06-07 23:28:58.491688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.974 [2024-06-07 23:28:58.491704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:35.974 [2024-06-07 23:28:58.501424] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190ee5c8 00:32:35.974 [2024-06-07 23:28:58.501904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.974 [2024-06-07 23:28:58.501920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:35.974 [2024-06-07 23:28:58.512915] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fd208 00:32:35.974 [2024-06-07 23:28:58.513745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.974 [2024-06-07 23:28:58.513760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.974 [2024-06-07 23:28:58.524320] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f1ca0 00:32:35.974 [2024-06-07 23:28:58.525168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.974 [2024-06-07 23:28:58.525184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:35.974 [2024-06-07 23:28:58.535711] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f92c0 00:32:35.974 [2024-06-07 23:28:58.536568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.974 [2024-06-07 23:28:58.536584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:35.974 [2024-06-07 23:28:58.547106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190eb328 00:32:35.974 [2024-06-07 23:28:58.547923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.974 [2024-06-07 23:28:58.547938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:35.974 [2024-06-07 23:28:58.558501] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190de8a8 00:32:35.974 [2024-06-07 23:28:58.559293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.974 [2024-06-07 23:28:58.559309] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:35.974 [2024-06-07 23:28:58.572048] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f6020 00:32:35.974 [2024-06-07 23:28:58.572702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.974 [2024-06-07 23:28:58.572718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.974 [2024-06-07 23:28:58.583423] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f1868 00:32:35.974 [2024-06-07 23:28:58.584050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.974 [2024-06-07 23:28:58.584074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:35.974 [2024-06-07 23:28:58.594805] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e2c28 00:32:35.974 [2024-06-07 23:28:58.595417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.974 [2024-06-07 23:28:58.595432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:35.974 [2024-06-07 23:28:58.606200] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f7970 00:32:35.974 [2024-06-07 23:28:58.606885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.974 [2024-06-07 23:28:58.606900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:35.974 [2024-06-07 23:28:58.617763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f57b0 00:32:35.974 [2024-06-07 23:28:58.618334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.974 [2024-06-07 23:28:58.618350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:35.974 [2024-06-07 23:28:58.629011] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f4298 00:32:35.974 [2024-06-07 23:28:58.629560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.974 [2024-06-07 23:28:58.629576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:35.974 [2024-06-07 23:28:58.640524] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190eea00 00:32:35.975 [2024-06-07 23:28:58.641048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.975 [2024-06-07 
23:28:58.641063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:35.975 [2024-06-07 23:28:58.651730] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e7c50 00:32:35.975 [2024-06-07 23:28:58.652232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.975 [2024-06-07 23:28:58.652252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:36.235 [2024-06-07 23:28:58.663117] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fda78 00:32:36.235 [2024-06-07 23:28:58.663607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.235 [2024-06-07 23:28:58.663623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:36.235 [2024-06-07 23:28:58.674695] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190f6890 00:32:36.235 [2024-06-07 23:28:58.675159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.235 [2024-06-07 23:28:58.675174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:36.235 [2024-06-07 23:28:58.685893] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e6fa8 00:32:36.235 [2024-06-07 23:28:58.686379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.235 [2024-06-07 23:28:58.686394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:36.235 [2024-06-07 23:28:58.698843] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190ea680 00:32:36.235 [2024-06-07 23:28:58.699934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.235 [2024-06-07 23:28:58.699949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:36.235 [2024-06-07 23:28:58.710231] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190e5658 00:32:36.235 [2024-06-07 23:28:58.711438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.235 [2024-06-07 23:28:58.711453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:36.235 [2024-06-07 23:28:58.719602] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190de470 00:32:36.235 [2024-06-07 23:28:58.719868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:36.235 [2024-06-07 23:28:58.719883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:36.235 00:32:36.235 Latency(us) 00:32:36.235 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:36.235 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:36.235 nvme0n1 : 2.00 22251.46 86.92 0.00 0.00 5747.41 2389.33 16493.23 00:32:36.235 =================================================================================================================== 00:32:36.235 Total : 22251.46 86.92 0.00 0.00 5747.41 2389.33 16493.23 00:32:36.235 0 00:32:36.235 23:28:58 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:36.235 23:28:58 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:36.235 23:28:58 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:36.235 23:28:58 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:36.235 | .driver_specific 00:32:36.236 | .nvme_error 00:32:36.236 | .status_code 00:32:36.236 | .command_transient_transport_error' 00:32:36.236 23:28:58 -- host/digest.sh@71 -- # (( 174 > 0 )) 00:32:36.236 23:28:58 -- host/digest.sh@73 -- # killprocess 3043821 00:32:36.236 23:28:58 -- common/autotest_common.sh@926 -- # '[' -z 3043821 ']' 00:32:36.236 23:28:58 -- common/autotest_common.sh@930 -- # kill -0 3043821 00:32:36.236 23:28:58 -- common/autotest_common.sh@931 -- # uname 00:32:36.236 23:28:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:36.236 23:28:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3043821 00:32:36.496 23:28:58 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:36.496 23:28:58 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:36.496 23:28:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3043821' 00:32:36.496 killing process with pid 3043821 00:32:36.496 23:28:58 -- common/autotest_common.sh@945 -- # kill 3043821 00:32:36.496 Received shutdown signal, test time was about 2.000000 seconds 00:32:36.496 00:32:36.496 Latency(us) 00:32:36.496 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:36.496 =================================================================================================================== 00:32:36.496 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:36.496 23:28:58 -- common/autotest_common.sh@950 -- # wait 3043821 00:32:36.496 23:28:59 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:32:36.496 23:28:59 -- host/digest.sh@54 -- # local rw bs qd 00:32:36.496 23:28:59 -- host/digest.sh@56 -- # rw=randwrite 00:32:36.496 23:28:59 -- host/digest.sh@56 -- # bs=131072 00:32:36.496 23:28:59 -- host/digest.sh@56 -- # qd=16 00:32:36.496 23:28:59 -- host/digest.sh@58 -- # bperfpid=3044512 00:32:36.496 23:28:59 -- host/digest.sh@60 -- # waitforlisten 3044512 /var/tmp/bperf.sock 00:32:36.496 23:28:59 -- common/autotest_common.sh@819 -- # '[' -z 3044512 ']' 00:32:36.496 23:28:59 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:32:36.496 23:28:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:36.496 23:28:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:36.496 23:28:59 -- 
common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:36.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:36.496 23:28:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:36.496 23:28:59 -- common/autotest_common.sh@10 -- # set +x 00:32:36.496 [2024-06-07 23:28:59.114259] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:32:36.496 [2024-06-07 23:28:59.114314] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3044512 ] 00:32:36.496 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:36.496 Zero copy mechanism will not be used. 00:32:36.496 EAL: No free 2048 kB hugepages reported on node 1 00:32:36.757 [2024-06-07 23:28:59.190976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:36.757 [2024-06-07 23:28:59.217786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:37.328 23:28:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:37.328 23:28:59 -- common/autotest_common.sh@852 -- # return 0 00:32:37.328 23:28:59 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:37.328 23:28:59 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:37.588 23:29:00 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:37.588 23:29:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:37.588 23:29:00 -- common/autotest_common.sh@10 -- # set +x 00:32:37.588 23:29:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:37.588 23:29:00 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:37.588 23:29:00 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:37.848 nvme0n1 00:32:37.848 23:29:00 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:32:37.848 23:29:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:37.848 23:29:00 -- common/autotest_common.sh@10 -- # set +x 00:32:37.848 23:29:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:37.848 23:29:00 -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:37.848 23:29:00 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:37.848 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:37.848 Zero copy mechanism will not be used. 00:32:37.848 Running I/O for 2 seconds... 
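Note: the traced host/digest.sh steps above reduce to the following condensed sequence (a sketch assembled from the commands visible in this log, with paths shortened relative to the spdk tree; it is not the digest.sh source itself). The initiator-side bdevperf instance is driven over /var/tmp/bperf.sock, while the accel error-injection RPCs go to the target's default socket:

    # Start bdevperf for 128 KiB random writes, queue depth 16, 2 second run.
    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

    # Enable per-controller NVMe error counters and unlimited bdev retries.
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Clear any active crc32c injection, then attach with data digest (--ddgst) enabled.
    scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt every 32nd crc32c calculation on the target and drive the workload.
    scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

    # Read back the transient transport error count, as in the (( 174 > 0 )) check of the previous run.
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

With --bdev-retry-count -1 each digest failure is retried rather than failed back to bdevperf, which is presumably why the previous run's summary reports 0.00 Fail/s even though 174 transient transport errors were counted.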
00:32:38.109 [2024-06-07 23:29:00.534688] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.109 [2024-06-07 23:29:00.534936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.109 [2024-06-07 23:29:00.534961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.109 [2024-06-07 23:29:00.547807] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.109 [2024-06-07 23:29:00.547940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.109 [2024-06-07 23:29:00.547957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.109 [2024-06-07 23:29:00.558293] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.109 [2024-06-07 23:29:00.558480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.109 [2024-06-07 23:29:00.558496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.109 [2024-06-07 23:29:00.568177] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.109 [2024-06-07 23:29:00.568280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.109 [2024-06-07 23:29:00.568296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.109 [2024-06-07 23:29:00.578621] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.109 [2024-06-07 23:29:00.578774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.109 [2024-06-07 23:29:00.578790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.109 [2024-06-07 23:29:00.588959] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.109 [2024-06-07 23:29:00.589071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.109 [2024-06-07 23:29:00.589087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.109 [2024-06-07 23:29:00.599772] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.109 [2024-06-07 23:29:00.599936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.109 [2024-06-07 23:29:00.599951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.109 [2024-06-07 23:29:00.610393] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.109 [2024-06-07 23:29:00.610586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.109 [2024-06-07 23:29:00.610601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.109 [2024-06-07 23:29:00.621766] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.109 [2024-06-07 23:29:00.622217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.109 [2024-06-07 23:29:00.622234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.109 [2024-06-07 23:29:00.633304] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.109 [2024-06-07 23:29:00.633704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.109 [2024-06-07 23:29:00.633723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.109 [2024-06-07 23:29:00.645231] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.110 [2024-06-07 23:29:00.645410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.110 [2024-06-07 23:29:00.645425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.110 [2024-06-07 23:29:00.656757] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.110 [2024-06-07 23:29:00.656966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.110 [2024-06-07 23:29:00.656981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.110 [2024-06-07 23:29:00.667673] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.110 [2024-06-07 23:29:00.667804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.110 [2024-06-07 23:29:00.667819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.110 [2024-06-07 23:29:00.677288] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.110 [2024-06-07 23:29:00.677389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.110 [2024-06-07 23:29:00.677404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.110 [2024-06-07 23:29:00.687356] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.110 [2024-06-07 23:29:00.687459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.110 [2024-06-07 23:29:00.687474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.110 [2024-06-07 23:29:00.697213] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.110 [2024-06-07 23:29:00.697574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.110 [2024-06-07 23:29:00.697591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.110 [2024-06-07 23:29:00.706221] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.110 [2024-06-07 23:29:00.706444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.110 [2024-06-07 23:29:00.706459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.110 [2024-06-07 23:29:00.715232] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.110 [2024-06-07 23:29:00.715415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.110 [2024-06-07 23:29:00.715430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.110 [2024-06-07 23:29:00.724367] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.110 [2024-06-07 23:29:00.724479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.110 [2024-06-07 23:29:00.724494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.110 [2024-06-07 23:29:00.733056] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.110 [2024-06-07 23:29:00.733173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.110 [2024-06-07 23:29:00.733188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.110 [2024-06-07 23:29:00.742366] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.110 [2024-06-07 23:29:00.742507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.110 [2024-06-07 23:29:00.742524] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.110 [2024-06-07 23:29:00.751328] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.110 [2024-06-07 23:29:00.751445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.110 [2024-06-07 23:29:00.751461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.110 [2024-06-07 23:29:00.760481] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.110 [2024-06-07 23:29:00.760579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.110 [2024-06-07 23:29:00.760596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.110 [2024-06-07 23:29:00.769547] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.110 [2024-06-07 23:29:00.769887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.110 [2024-06-07 23:29:00.769904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.110 [2024-06-07 23:29:00.778529] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.110 [2024-06-07 23:29:00.778857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.110 [2024-06-07 23:29:00.778872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.110 [2024-06-07 23:29:00.788739] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.110 [2024-06-07 23:29:00.788850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.110 [2024-06-07 23:29:00.788865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.371 [2024-06-07 23:29:00.797393] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.371 [2024-06-07 23:29:00.797709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.371 [2024-06-07 23:29:00.797724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.371 [2024-06-07 23:29:00.808134] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.371 [2024-06-07 23:29:00.808283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.371 
[2024-06-07 23:29:00.808298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.371 [2024-06-07 23:29:00.818350] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.371 [2024-06-07 23:29:00.818628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.371 [2024-06-07 23:29:00.818643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.371 [2024-06-07 23:29:00.828905] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.371 [2024-06-07 23:29:00.829245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.371 [2024-06-07 23:29:00.829261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.371 [2024-06-07 23:29:00.840577] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.371 [2024-06-07 23:29:00.840809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.371 [2024-06-07 23:29:00.840825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.371 [2024-06-07 23:29:00.852072] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.371 [2024-06-07 23:29:00.852530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.371 [2024-06-07 23:29:00.852547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.372 [2024-06-07 23:29:00.863060] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.372 [2024-06-07 23:29:00.863246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.372 [2024-06-07 23:29:00.863261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.372 [2024-06-07 23:29:00.873618] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.372 [2024-06-07 23:29:00.873835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.372 [2024-06-07 23:29:00.873850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.372 [2024-06-07 23:29:00.884613] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.372 [2024-06-07 23:29:00.884729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.372 [2024-06-07 23:29:00.884744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.372 [2024-06-07 23:29:00.895983] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.372 [2024-06-07 23:29:00.896264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.372 [2024-06-07 23:29:00.896287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.372 [2024-06-07 23:29:00.907921] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.372 [2024-06-07 23:29:00.908114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.372 [2024-06-07 23:29:00.908129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.372 [2024-06-07 23:29:00.919175] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.372 [2024-06-07 23:29:00.919299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.372 [2024-06-07 23:29:00.919317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.372 [2024-06-07 23:29:00.930623] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.372 [2024-06-07 23:29:00.930763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.372 [2024-06-07 23:29:00.930778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.372 [2024-06-07 23:29:00.940163] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.372 [2024-06-07 23:29:00.940514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.372 [2024-06-07 23:29:00.940531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.372 [2024-06-07 23:29:00.948553] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.372 [2024-06-07 23:29:00.948848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.372 [2024-06-07 23:29:00.948864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.372 [2024-06-07 23:29:00.957167] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.372 [2024-06-07 23:29:00.957315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.372 [2024-06-07 23:29:00.957329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.372 [2024-06-07 23:29:00.965684] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.372 [2024-06-07 23:29:00.965755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.372 [2024-06-07 23:29:00.965769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.372 [2024-06-07 23:29:00.973171] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.372 [2024-06-07 23:29:00.973293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.372 [2024-06-07 23:29:00.973311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.372 [2024-06-07 23:29:00.977852] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.372 [2024-06-07 23:29:00.977990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.372 [2024-06-07 23:29:00.978007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.372 [2024-06-07 23:29:00.985143] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.372 [2024-06-07 23:29:00.985461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.372 [2024-06-07 23:29:00.985477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.372 [2024-06-07 23:29:00.995989] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.372 [2024-06-07 23:29:00.996223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.372 [2024-06-07 23:29:00.996238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.372 [2024-06-07 23:29:01.007383] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.372 [2024-06-07 23:29:01.007686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.372 [2024-06-07 23:29:01.007703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.372 [2024-06-07 23:29:01.018451] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.372 [2024-06-07 23:29:01.018802] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.372 [2024-06-07 23:29:01.018818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.372 [2024-06-07 23:29:01.027398] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.372 [2024-06-07 23:29:01.027630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.372 [2024-06-07 23:29:01.027646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.372 [2024-06-07 23:29:01.035156] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.372 [2024-06-07 23:29:01.035330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.372 [2024-06-07 23:29:01.035346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.372 [2024-06-07 23:29:01.044784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.372 [2024-06-07 23:29:01.045100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.372 [2024-06-07 23:29:01.045116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.634 [2024-06-07 23:29:01.055786] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.634 [2024-06-07 23:29:01.055878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.634 [2024-06-07 23:29:01.055896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.634 [2024-06-07 23:29:01.066233] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.634 [2024-06-07 23:29:01.066456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.634 [2024-06-07 23:29:01.066471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.635 [2024-06-07 23:29:01.076770] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.635 [2024-06-07 23:29:01.077101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.635 [2024-06-07 23:29:01.077117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.635 [2024-06-07 23:29:01.087757] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.635 
[2024-06-07 23:29:01.088073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.635 [2024-06-07 23:29:01.088090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.635 [2024-06-07 23:29:01.096836] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.635 [2024-06-07 23:29:01.096915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.635 [2024-06-07 23:29:01.096931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.635 [2024-06-07 23:29:01.106118] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.635 [2024-06-07 23:29:01.106406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.635 [2024-06-07 23:29:01.106422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.635 [2024-06-07 23:29:01.115827] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.635 [2024-06-07 23:29:01.115954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.635 [2024-06-07 23:29:01.115969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.635 [2024-06-07 23:29:01.122652] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.635 [2024-06-07 23:29:01.122783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.635 [2024-06-07 23:29:01.122798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.635 [2024-06-07 23:29:01.127098] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.635 [2024-06-07 23:29:01.127188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.635 [2024-06-07 23:29:01.127204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.635 [2024-06-07 23:29:01.130699] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.635 [2024-06-07 23:29:01.130764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.635 [2024-06-07 23:29:01.130779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.635 [2024-06-07 23:29:01.134077] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.635 [2024-06-07 23:29:01.134185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.635 [2024-06-07 23:29:01.134200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.635 [2024-06-07 23:29:01.137704] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.635 [2024-06-07 23:29:01.137900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.635 [2024-06-07 23:29:01.137915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.635 [2024-06-07 23:29:01.141948] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.635 [2024-06-07 23:29:01.142115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.635 [2024-06-07 23:29:01.142134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.635 [2024-06-07 23:29:01.146462] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.635 [2024-06-07 23:29:01.146615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.635 [2024-06-07 23:29:01.146630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.635 [2024-06-07 23:29:01.150818] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.635 [2024-06-07 23:29:01.150946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.635 [2024-06-07 23:29:01.150962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.635 [2024-06-07 23:29:01.155332] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.635 [2024-06-07 23:29:01.155407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.635 [2024-06-07 23:29:01.155422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.635 [2024-06-07 23:29:01.163186] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.635 [2024-06-07 23:29:01.163275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.635 [2024-06-07 23:29:01.163290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.635 [2024-06-07 23:29:01.170074] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.635 [2024-06-07 23:29:01.170137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.635 [2024-06-07 23:29:01.170151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.635 [2024-06-07 23:29:01.178344] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.635 [2024-06-07 23:29:01.178588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.635 [2024-06-07 23:29:01.178602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.635 [2024-06-07 23:29:01.187342] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.635 [2024-06-07 23:29:01.187624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.635 [2024-06-07 23:29:01.187640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.635 [2024-06-07 23:29:01.196093] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.635 [2024-06-07 23:29:01.196435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.635 [2024-06-07 23:29:01.196451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.635 [2024-06-07 23:29:01.202160] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.635 [2024-06-07 23:29:01.202265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.635 [2024-06-07 23:29:01.202281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.635 [2024-06-07 23:29:01.210229] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.635 [2024-06-07 23:29:01.210374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.635 [2024-06-07 23:29:01.210389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.635 [2024-06-07 23:29:01.218119] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.635 [2024-06-07 23:29:01.218255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.635 [2024-06-07 23:29:01.218270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:32:38.635 [2024-06-07 23:29:01.226420] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.635 [2024-06-07 23:29:01.226518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.635 [2024-06-07 23:29:01.226533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.635 [2024-06-07 23:29:01.231988] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.635 [2024-06-07 23:29:01.232079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.635 [2024-06-07 23:29:01.232093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.635 [2024-06-07 23:29:01.236785] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.635 [2024-06-07 23:29:01.237113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.635 [2024-06-07 23:29:01.237131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.635 [2024-06-07 23:29:01.245133] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.635 [2024-06-07 23:29:01.245374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.635 [2024-06-07 23:29:01.245389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.635 [2024-06-07 23:29:01.254150] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.635 [2024-06-07 23:29:01.254416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.635 [2024-06-07 23:29:01.254432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.636 [2024-06-07 23:29:01.264426] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.636 [2024-06-07 23:29:01.264539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.636 [2024-06-07 23:29:01.264553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.636 [2024-06-07 23:29:01.272978] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.636 [2024-06-07 23:29:01.273307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.636 [2024-06-07 23:29:01.273323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.636 [2024-06-07 23:29:01.282759] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.636 [2024-06-07 23:29:01.283035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.636 [2024-06-07 23:29:01.283051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.636 [2024-06-07 23:29:01.293183] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.636 [2024-06-07 23:29:01.293296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.636 [2024-06-07 23:29:01.293311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.636 [2024-06-07 23:29:01.302288] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.636 [2024-06-07 23:29:01.302394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.636 [2024-06-07 23:29:01.302409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.636 [2024-06-07 23:29:01.310763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.636 [2024-06-07 23:29:01.310944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.636 [2024-06-07 23:29:01.310958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.899 [2024-06-07 23:29:01.318037] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.899 [2024-06-07 23:29:01.318134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.899 [2024-06-07 23:29:01.318149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.899 [2024-06-07 23:29:01.325390] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.899 [2024-06-07 23:29:01.325601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.899 [2024-06-07 23:29:01.325617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.899 [2024-06-07 23:29:01.331068] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.899 [2024-06-07 23:29:01.331287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.899 [2024-06-07 23:29:01.331302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.899 [2024-06-07 23:29:01.336597] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.899 [2024-06-07 23:29:01.336735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.899 [2024-06-07 23:29:01.336750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.899 [2024-06-07 23:29:01.343718] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.899 [2024-06-07 23:29:01.344000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.899 [2024-06-07 23:29:01.344019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.899 [2024-06-07 23:29:01.351796] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.899 [2024-06-07 23:29:01.351903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.899 [2024-06-07 23:29:01.351918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.899 [2024-06-07 23:29:01.361335] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.899 [2024-06-07 23:29:01.361448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.899 [2024-06-07 23:29:01.361464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.899 [2024-06-07 23:29:01.371839] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.899 [2024-06-07 23:29:01.372182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.899 [2024-06-07 23:29:01.372198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.899 [2024-06-07 23:29:01.382611] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.899 [2024-06-07 23:29:01.382954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.899 [2024-06-07 23:29:01.382971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.899 [2024-06-07 23:29:01.393518] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.899 [2024-06-07 23:29:01.393837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.899 [2024-06-07 23:29:01.393853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.899 [2024-06-07 23:29:01.403872] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.899 [2024-06-07 23:29:01.404032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.899 [2024-06-07 23:29:01.404047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.899 [2024-06-07 23:29:01.414178] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.899 [2024-06-07 23:29:01.414318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.899 [2024-06-07 23:29:01.414333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.899 [2024-06-07 23:29:01.424649] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.899 [2024-06-07 23:29:01.425009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.899 [2024-06-07 23:29:01.425025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.899 [2024-06-07 23:29:01.435212] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.899 [2024-06-07 23:29:01.435396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.900 [2024-06-07 23:29:01.435411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.900 [2024-06-07 23:29:01.444341] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.900 [2024-06-07 23:29:01.444695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.900 [2024-06-07 23:29:01.444711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.900 [2024-06-07 23:29:01.454790] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.900 [2024-06-07 23:29:01.454991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.900 [2024-06-07 23:29:01.455009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.900 [2024-06-07 23:29:01.464654] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.900 [2024-06-07 23:29:01.464894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.900 [2024-06-07 23:29:01.464908] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.900 [2024-06-07 23:29:01.475047] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.900 [2024-06-07 23:29:01.475379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.900 [2024-06-07 23:29:01.475397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.900 [2024-06-07 23:29:01.482188] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.900 [2024-06-07 23:29:01.482345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.900 [2024-06-07 23:29:01.482360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.900 [2024-06-07 23:29:01.487582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.900 [2024-06-07 23:29:01.487687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.900 [2024-06-07 23:29:01.487702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.900 [2024-06-07 23:29:01.497082] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.900 [2024-06-07 23:29:01.497167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.900 [2024-06-07 23:29:01.497183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.900 [2024-06-07 23:29:01.505915] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.900 [2024-06-07 23:29:01.505986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.900 [2024-06-07 23:29:01.506001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.900 [2024-06-07 23:29:01.516456] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.900 [2024-06-07 23:29:01.516599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.900 [2024-06-07 23:29:01.516614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.900 [2024-06-07 23:29:01.527715] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.900 [2024-06-07 23:29:01.528076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.900 [2024-06-07 
23:29:01.528093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.900 [2024-06-07 23:29:01.536852] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.900 [2024-06-07 23:29:01.537087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.900 [2024-06-07 23:29:01.537104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.900 [2024-06-07 23:29:01.542414] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.900 [2024-06-07 23:29:01.542573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.900 [2024-06-07 23:29:01.542588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.900 [2024-06-07 23:29:01.547032] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.900 [2024-06-07 23:29:01.547205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.900 [2024-06-07 23:29:01.547220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.900 [2024-06-07 23:29:01.551429] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.900 [2024-06-07 23:29:01.551598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.900 [2024-06-07 23:29:01.551613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.900 [2024-06-07 23:29:01.555536] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.900 [2024-06-07 23:29:01.555618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.900 [2024-06-07 23:29:01.555633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.900 [2024-06-07 23:29:01.558761] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.900 [2024-06-07 23:29:01.558845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.900 [2024-06-07 23:29:01.558863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:38.900 [2024-06-07 23:29:01.562856] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.900 [2024-06-07 23:29:01.562968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:38.900 [2024-06-07 23:29:01.562986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:38.900 [2024-06-07 23:29:01.567231] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.900 [2024-06-07 23:29:01.567553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.900 [2024-06-07 23:29:01.567569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:38.900 [2024-06-07 23:29:01.574329] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:38.900 [2024-06-07 23:29:01.574594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.900 [2024-06-07 23:29:01.574609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.163 [2024-06-07 23:29:01.583989] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.163 [2024-06-07 23:29:01.584145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.163 [2024-06-07 23:29:01.584160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.163 [2024-06-07 23:29:01.593978] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.163 [2024-06-07 23:29:01.594255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.163 [2024-06-07 23:29:01.594271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.163 [2024-06-07 23:29:01.603587] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.163 [2024-06-07 23:29:01.603851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.163 [2024-06-07 23:29:01.603866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.163 [2024-06-07 23:29:01.613865] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.163 [2024-06-07 23:29:01.613975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.163 [2024-06-07 23:29:01.613990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.163 [2024-06-07 23:29:01.623934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.163 [2024-06-07 23:29:01.624033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.163 [2024-06-07 23:29:01.624048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.163 [2024-06-07 23:29:01.633018] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.163 [2024-06-07 23:29:01.633385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.163 [2024-06-07 23:29:01.633401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.163 [2024-06-07 23:29:01.642832] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.163 [2024-06-07 23:29:01.642992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.163 [2024-06-07 23:29:01.643008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.163 [2024-06-07 23:29:01.653598] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.163 [2024-06-07 23:29:01.653807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.163 [2024-06-07 23:29:01.653822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.163 [2024-06-07 23:29:01.664755] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.163 [2024-06-07 23:29:01.664910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.163 [2024-06-07 23:29:01.664925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.163 [2024-06-07 23:29:01.675121] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.163 [2024-06-07 23:29:01.675326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.163 [2024-06-07 23:29:01.675341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.163 [2024-06-07 23:29:01.685663] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.163 [2024-06-07 23:29:01.685765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.163 [2024-06-07 23:29:01.685783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.163 [2024-06-07 23:29:01.696188] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.163 [2024-06-07 23:29:01.696571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.163 [2024-06-07 23:29:01.696586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.163 [2024-06-07 23:29:01.707130] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.163 [2024-06-07 23:29:01.707318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.163 [2024-06-07 23:29:01.707333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.163 [2024-06-07 23:29:01.717457] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.163 [2024-06-07 23:29:01.717547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.163 [2024-06-07 23:29:01.717562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.163 [2024-06-07 23:29:01.728558] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.163 [2024-06-07 23:29:01.728896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.163 [2024-06-07 23:29:01.728912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.163 [2024-06-07 23:29:01.739552] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.163 [2024-06-07 23:29:01.739744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.163 [2024-06-07 23:29:01.739759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.163 [2024-06-07 23:29:01.750069] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.163 [2024-06-07 23:29:01.750365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.163 [2024-06-07 23:29:01.750380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.163 [2024-06-07 23:29:01.761862] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.163 [2024-06-07 23:29:01.762223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.163 [2024-06-07 23:29:01.762239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.163 [2024-06-07 23:29:01.772952] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.163 [2024-06-07 23:29:01.773116] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.164 [2024-06-07 23:29:01.773131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.164 [2024-06-07 23:29:01.784173] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.164 [2024-06-07 23:29:01.784494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.164 [2024-06-07 23:29:01.784510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.164 [2024-06-07 23:29:01.794856] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.164 [2024-06-07 23:29:01.794955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.164 [2024-06-07 23:29:01.794970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.164 [2024-06-07 23:29:01.805351] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.164 [2024-06-07 23:29:01.805672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.164 [2024-06-07 23:29:01.805688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.164 [2024-06-07 23:29:01.816189] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.164 [2024-06-07 23:29:01.816292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.164 [2024-06-07 23:29:01.816307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.164 [2024-06-07 23:29:01.825486] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.164 [2024-06-07 23:29:01.825770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.164 [2024-06-07 23:29:01.825786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.164 [2024-06-07 23:29:01.834751] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.164 [2024-06-07 23:29:01.834886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.164 [2024-06-07 23:29:01.834902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.426 [2024-06-07 23:29:01.843422] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.426 [2024-06-07 23:29:01.843570] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.426 [2024-06-07 23:29:01.843585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.426 [2024-06-07 23:29:01.852874] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.426 [2024-06-07 23:29:01.852954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.426 [2024-06-07 23:29:01.852969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.426 [2024-06-07 23:29:01.861709] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.426 [2024-06-07 23:29:01.861993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.426 [2024-06-07 23:29:01.862008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.426 [2024-06-07 23:29:01.870495] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.426 [2024-06-07 23:29:01.870617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.426 [2024-06-07 23:29:01.870632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.426 [2024-06-07 23:29:01.878566] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.426 [2024-06-07 23:29:01.878818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.426 [2024-06-07 23:29:01.878833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.426 [2024-06-07 23:29:01.884557] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.427 [2024-06-07 23:29:01.884848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.427 [2024-06-07 23:29:01.884863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.427 [2024-06-07 23:29:01.888966] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.427 [2024-06-07 23:29:01.889221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.427 [2024-06-07 23:29:01.889238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.427 [2024-06-07 23:29:01.893936] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.427 [2024-06-07 
23:29:01.894123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.427 [2024-06-07 23:29:01.894144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.427 [2024-06-07 23:29:01.898682] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.427 [2024-06-07 23:29:01.898830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.427 [2024-06-07 23:29:01.898845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.427 [2024-06-07 23:29:01.903333] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.427 [2024-06-07 23:29:01.903606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.427 [2024-06-07 23:29:01.903622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.427 [2024-06-07 23:29:01.911324] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.427 [2024-06-07 23:29:01.911567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.427 [2024-06-07 23:29:01.911582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.427 [2024-06-07 23:29:01.919788] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.427 [2024-06-07 23:29:01.920002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.427 [2024-06-07 23:29:01.920020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.427 [2024-06-07 23:29:01.925987] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.427 [2024-06-07 23:29:01.926064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.427 [2024-06-07 23:29:01.926080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.427 [2024-06-07 23:29:01.929406] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.427 [2024-06-07 23:29:01.929549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.427 [2024-06-07 23:29:01.929564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.427 [2024-06-07 23:29:01.933018] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 
00:32:39.427 [2024-06-07 23:29:01.933132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.427 [2024-06-07 23:29:01.933147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.427 [2024-06-07 23:29:01.936532] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.427 [2024-06-07 23:29:01.936827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.427 [2024-06-07 23:29:01.936850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.427 [2024-06-07 23:29:01.940015] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.427 [2024-06-07 23:29:01.940121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.427 [2024-06-07 23:29:01.940137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.427 [2024-06-07 23:29:01.944534] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.427 [2024-06-07 23:29:01.944658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.427 [2024-06-07 23:29:01.944673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.427 [2024-06-07 23:29:01.951799] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.427 [2024-06-07 23:29:01.951985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.427 [2024-06-07 23:29:01.952000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.427 [2024-06-07 23:29:01.956031] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.427 [2024-06-07 23:29:01.956139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.427 [2024-06-07 23:29:01.956155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.427 [2024-06-07 23:29:01.959974] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.427 [2024-06-07 23:29:01.960067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.427 [2024-06-07 23:29:01.960085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.427 [2024-06-07 23:29:01.963934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.427 [2024-06-07 23:29:01.964205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.427 [2024-06-07 23:29:01.964220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.427 [2024-06-07 23:29:01.972109] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.427 [2024-06-07 23:29:01.972208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.427 [2024-06-07 23:29:01.972223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.427 [2024-06-07 23:29:01.980410] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.427 [2024-06-07 23:29:01.980640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.427 [2024-06-07 23:29:01.980655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.427 [2024-06-07 23:29:01.988494] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.427 [2024-06-07 23:29:01.988762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.427 [2024-06-07 23:29:01.988777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.427 [2024-06-07 23:29:01.996859] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.427 [2024-06-07 23:29:01.997161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.427 [2024-06-07 23:29:01.997180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.427 [2024-06-07 23:29:02.005365] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.427 [2024-06-07 23:29:02.005467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.427 [2024-06-07 23:29:02.005482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.427 [2024-06-07 23:29:02.011675] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.427 [2024-06-07 23:29:02.011784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.427 [2024-06-07 23:29:02.011799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.427 [2024-06-07 23:29:02.020796] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.427 [2024-06-07 23:29:02.020909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.427 [2024-06-07 23:29:02.020924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.427 [2024-06-07 23:29:02.029253] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.427 [2024-06-07 23:29:02.029506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.427 [2024-06-07 23:29:02.029521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.427 [2024-06-07 23:29:02.037290] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.427 [2024-06-07 23:29:02.037584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.427 [2024-06-07 23:29:02.037600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.427 [2024-06-07 23:29:02.043550] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.427 [2024-06-07 23:29:02.043655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.427 [2024-06-07 23:29:02.043674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.427 [2024-06-07 23:29:02.047840] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.427 [2024-06-07 23:29:02.048172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.428 [2024-06-07 23:29:02.048188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.428 [2024-06-07 23:29:02.056005] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.428 [2024-06-07 23:29:02.056185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.428 [2024-06-07 23:29:02.056200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.428 [2024-06-07 23:29:02.062666] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.428 [2024-06-07 23:29:02.062804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.428 [2024-06-07 23:29:02.062819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.428 [2024-06-07 23:29:02.071026] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.428 [2024-06-07 23:29:02.071446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.428 [2024-06-07 23:29:02.071462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.428 [2024-06-07 23:29:02.077746] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.428 [2024-06-07 23:29:02.077883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.428 [2024-06-07 23:29:02.077898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.428 [2024-06-07 23:29:02.086652] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.428 [2024-06-07 23:29:02.086919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.428 [2024-06-07 23:29:02.086945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.428 [2024-06-07 23:29:02.094482] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.428 [2024-06-07 23:29:02.094766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.428 [2024-06-07 23:29:02.094783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.428 [2024-06-07 23:29:02.104399] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.428 [2024-06-07 23:29:02.104535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.428 [2024-06-07 23:29:02.104550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.690 [2024-06-07 23:29:02.114280] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.690 [2024-06-07 23:29:02.114637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.690 [2024-06-07 23:29:02.114653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.690 [2024-06-07 23:29:02.123201] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.690 [2024-06-07 23:29:02.123342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.690 [2024-06-07 23:29:02.123357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.690 
[2024-06-07 23:29:02.133611] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.690 [2024-06-07 23:29:02.133764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.690 [2024-06-07 23:29:02.133779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.690 [2024-06-07 23:29:02.143642] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.690 [2024-06-07 23:29:02.143924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.690 [2024-06-07 23:29:02.143940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.690 [2024-06-07 23:29:02.152687] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.690 [2024-06-07 23:29:02.152815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.690 [2024-06-07 23:29:02.152830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.690 [2024-06-07 23:29:02.164055] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.690 [2024-06-07 23:29:02.164390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.690 [2024-06-07 23:29:02.164406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.690 [2024-06-07 23:29:02.172315] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.690 [2024-06-07 23:29:02.172451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.690 [2024-06-07 23:29:02.172469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.690 [2024-06-07 23:29:02.177918] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.690 [2024-06-07 23:29:02.178091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.690 [2024-06-07 23:29:02.178109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.690 [2024-06-07 23:29:02.183118] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.690 [2024-06-07 23:29:02.183260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.691 [2024-06-07 23:29:02.183279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:32:39.691 [2024-06-07 23:29:02.192357] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.691 [2024-06-07 23:29:02.192632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.691 [2024-06-07 23:29:02.192648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.691 [2024-06-07 23:29:02.200215] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.691 [2024-06-07 23:29:02.200479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.691 [2024-06-07 23:29:02.200495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.691 [2024-06-07 23:29:02.207660] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.691 [2024-06-07 23:29:02.207835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.691 [2024-06-07 23:29:02.207850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.691 [2024-06-07 23:29:02.214884] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.691 [2024-06-07 23:29:02.215278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.691 [2024-06-07 23:29:02.215294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.691 [2024-06-07 23:29:02.222831] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.691 [2024-06-07 23:29:02.222930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.691 [2024-06-07 23:29:02.222945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.691 [2024-06-07 23:29:02.231715] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.691 [2024-06-07 23:29:02.232099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.691 [2024-06-07 23:29:02.232115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.691 [2024-06-07 23:29:02.238771] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.691 [2024-06-07 23:29:02.239106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.691 [2024-06-07 23:29:02.239122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.691 [2024-06-07 23:29:02.247389] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.691 [2024-06-07 23:29:02.247717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.691 [2024-06-07 23:29:02.247733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.691 [2024-06-07 23:29:02.256312] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.691 [2024-06-07 23:29:02.256527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.691 [2024-06-07 23:29:02.256546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.691 [2024-06-07 23:29:02.266661] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.691 [2024-06-07 23:29:02.266836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.691 [2024-06-07 23:29:02.266854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.691 [2024-06-07 23:29:02.276863] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.691 [2024-06-07 23:29:02.277169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.691 [2024-06-07 23:29:02.277185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.691 [2024-06-07 23:29:02.287704] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.691 [2024-06-07 23:29:02.287788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.691 [2024-06-07 23:29:02.287804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.691 [2024-06-07 23:29:02.298057] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.691 [2024-06-07 23:29:02.298197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.691 [2024-06-07 23:29:02.298212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.691 [2024-06-07 23:29:02.307517] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.691 [2024-06-07 23:29:02.307770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.691 [2024-06-07 23:29:02.307785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.691 [2024-06-07 23:29:02.318198] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.691 [2024-06-07 23:29:02.318303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.691 [2024-06-07 23:29:02.318323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.691 [2024-06-07 23:29:02.328795] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.691 [2024-06-07 23:29:02.329093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.691 [2024-06-07 23:29:02.329109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.691 [2024-06-07 23:29:02.339964] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.691 [2024-06-07 23:29:02.340340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.691 [2024-06-07 23:29:02.340357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.691 [2024-06-07 23:29:02.351598] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.691 [2024-06-07 23:29:02.351796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.691 [2024-06-07 23:29:02.351811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.691 [2024-06-07 23:29:02.361891] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.691 [2024-06-07 23:29:02.362182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.691 [2024-06-07 23:29:02.362198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.953 [2024-06-07 23:29:02.373002] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.953 [2024-06-07 23:29:02.373082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.953 [2024-06-07 23:29:02.373097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.953 [2024-06-07 23:29:02.384506] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.953 [2024-06-07 23:29:02.384752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.953 [2024-06-07 23:29:02.384768] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.953 [2024-06-07 23:29:02.394967] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.953 [2024-06-07 23:29:02.395349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.953 [2024-06-07 23:29:02.395365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.953 [2024-06-07 23:29:02.405138] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.953 [2024-06-07 23:29:02.405231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.953 [2024-06-07 23:29:02.405253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.953 [2024-06-07 23:29:02.416082] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.953 [2024-06-07 23:29:02.416258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.953 [2024-06-07 23:29:02.416274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.953 [2024-06-07 23:29:02.427784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.953 [2024-06-07 23:29:02.428117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.953 [2024-06-07 23:29:02.428132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.953 [2024-06-07 23:29:02.438800] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.953 [2024-06-07 23:29:02.438974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.953 [2024-06-07 23:29:02.438989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.953 [2024-06-07 23:29:02.449806] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.953 [2024-06-07 23:29:02.450027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.953 [2024-06-07 23:29:02.450045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.953 [2024-06-07 23:29:02.460878] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.953 [2024-06-07 23:29:02.461144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.953 [2024-06-07 23:29:02.461159] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.954 [2024-06-07 23:29:02.471990] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.954 [2024-06-07 23:29:02.472298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.954 [2024-06-07 23:29:02.472314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.954 [2024-06-07 23:29:02.482361] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.954 [2024-06-07 23:29:02.482604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.954 [2024-06-07 23:29:02.482619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.954 [2024-06-07 23:29:02.492179] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.954 [2024-06-07 23:29:02.492316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.954 [2024-06-07 23:29:02.492331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.954 [2024-06-07 23:29:02.501659] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.954 [2024-06-07 23:29:02.501814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.954 [2024-06-07 23:29:02.501829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.954 [2024-06-07 23:29:02.510214] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.954 [2024-06-07 23:29:02.510302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.954 [2024-06-07 23:29:02.510317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.954 [2024-06-07 23:29:02.519785] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10b04f0) with pdu=0x2000190fef90 00:32:39.954 [2024-06-07 23:29:02.519887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.954 [2024-06-07 23:29:02.519901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.954 00:32:39.954 Latency(us) 00:32:39.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:39.954 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:39.954 nvme0n1 : 2.00 3537.81 442.23 0.00 0.00 4515.03 1522.35 13161.81 00:32:39.954 
=================================================================================================================== 00:32:39.954 Total : 3537.81 442.23 0.00 0.00 4515.03 1522.35 13161.81 00:32:39.954 0 00:32:39.954 23:29:02 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:39.954 23:29:02 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:39.954 | .driver_specific 00:32:39.954 | .nvme_error 00:32:39.954 | .status_code 00:32:39.954 | .command_transient_transport_error' 00:32:39.954 23:29:02 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:39.954 23:29:02 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:40.216 23:29:02 -- host/digest.sh@71 -- # (( 228 > 0 )) 00:32:40.216 23:29:02 -- host/digest.sh@73 -- # killprocess 3044512 00:32:40.216 23:29:02 -- common/autotest_common.sh@926 -- # '[' -z 3044512 ']' 00:32:40.216 23:29:02 -- common/autotest_common.sh@930 -- # kill -0 3044512 00:32:40.216 23:29:02 -- common/autotest_common.sh@931 -- # uname 00:32:40.216 23:29:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:40.216 23:29:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3044512 00:32:40.216 23:29:02 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:40.216 23:29:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:40.216 23:29:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3044512' 00:32:40.216 killing process with pid 3044512 00:32:40.216 23:29:02 -- common/autotest_common.sh@945 -- # kill 3044512 00:32:40.216 Received shutdown signal, test time was about 2.000000 seconds 00:32:40.216 00:32:40.216 Latency(us) 00:32:40.216 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:40.216 =================================================================================================================== 00:32:40.216 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:40.216 23:29:02 -- common/autotest_common.sh@950 -- # wait 3044512 00:32:40.216 23:29:02 -- host/digest.sh@115 -- # killprocess 3042075 00:32:40.216 23:29:02 -- common/autotest_common.sh@926 -- # '[' -z 3042075 ']' 00:32:40.216 23:29:02 -- common/autotest_common.sh@930 -- # kill -0 3042075 00:32:40.216 23:29:02 -- common/autotest_common.sh@931 -- # uname 00:32:40.216 23:29:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:40.216 23:29:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3042075 00:32:40.216 23:29:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:40.216 23:29:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:40.216 23:29:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3042075' 00:32:40.216 killing process with pid 3042075 00:32:40.216 23:29:02 -- common/autotest_common.sh@945 -- # kill 3042075 00:32:40.216 23:29:02 -- common/autotest_common.sh@950 -- # wait 3042075 00:32:40.476 00:32:40.476 real 0m15.839s 00:32:40.476 user 0m30.583s 00:32:40.476 sys 0m3.423s 00:32:40.476 23:29:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:40.476 23:29:02 -- common/autotest_common.sh@10 -- # set +x 00:32:40.476 ************************************ 00:32:40.476 END TEST nvmf_digest_error 00:32:40.476 ************************************ 00:32:40.476 23:29:03 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:32:40.476 23:29:03 -- host/digest.sh@139 -- # nvmftestfini 00:32:40.476 
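The pass/fail decision traced above condenses to a short sketch; the rpc.py path, the bperf socket, the bdev name, and the jq filter are taken verbatim from the trace, and the final test mirrors the (( 228 > 0 )) check shown:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Read the error counters bdevperf accumulated on nvme0n1 while the target injected
  # data-digest failures, then require at least one transient transport error completion.
  errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 ))   # this run counted 228 such completions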
23:29:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:40.476 23:29:03 -- nvmf/common.sh@116 -- # sync 00:32:40.476 23:29:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:40.476 23:29:03 -- nvmf/common.sh@119 -- # set +e 00:32:40.476 23:29:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:40.476 23:29:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:40.476 rmmod nvme_tcp 00:32:40.476 rmmod nvme_fabrics 00:32:40.476 rmmod nvme_keyring 00:32:40.476 23:29:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:40.476 23:29:03 -- nvmf/common.sh@123 -- # set -e 00:32:40.476 23:29:03 -- nvmf/common.sh@124 -- # return 0 00:32:40.476 23:29:03 -- nvmf/common.sh@477 -- # '[' -n 3042075 ']' 00:32:40.476 23:29:03 -- nvmf/common.sh@478 -- # killprocess 3042075 00:32:40.476 23:29:03 -- common/autotest_common.sh@926 -- # '[' -z 3042075 ']' 00:32:40.476 23:29:03 -- common/autotest_common.sh@930 -- # kill -0 3042075 00:32:40.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3042075) - No such process 00:32:40.476 23:29:03 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3042075 is not found' 00:32:40.476 Process with pid 3042075 is not found 00:32:40.476 23:29:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:40.476 23:29:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:40.476 23:29:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:40.476 23:29:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:40.476 23:29:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:40.476 23:29:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:40.476 23:29:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:40.476 23:29:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:43.023 23:29:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:43.023 00:32:43.023 real 0m41.492s 00:32:43.023 user 1m3.812s 00:32:43.023 sys 0m12.291s 00:32:43.023 23:29:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:43.023 23:29:05 -- common/autotest_common.sh@10 -- # set +x 00:32:43.023 ************************************ 00:32:43.023 END TEST nvmf_digest 00:32:43.023 ************************************ 00:32:43.023 23:29:05 -- nvmf/nvmf.sh@109 -- # [[ 0 -eq 1 ]] 00:32:43.023 23:29:05 -- nvmf/nvmf.sh@114 -- # [[ 0 -eq 1 ]] 00:32:43.023 23:29:05 -- nvmf/nvmf.sh@119 -- # [[ phy == phy ]] 00:32:43.023 23:29:05 -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:43.023 23:29:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:43.023 23:29:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:43.023 23:29:05 -- common/autotest_common.sh@10 -- # set +x 00:32:43.023 ************************************ 00:32:43.023 START TEST nvmf_bdevperf 00:32:43.023 ************************************ 00:32:43.023 23:29:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:43.023 * Looking for test storage... 
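Before the bdevperf suite starts, the nvmftestfini teardown traced just above amounts to this short sketch; the module names, helper, and interface are those in the trace, and _remove_spdk_ns is referenced by name only since its body is not shown in this log:

  modprobe -v -r nvme-tcp        # the trace shows this also unloading nvme_fabrics and nvme_keyring
  modprobe -v -r nvme-fabrics
  _remove_spdk_ns                # autotest helper; presumably tears down the cvl_0_0_ns_spdk namespace
  ip -4 addr flush cvl_0_1       # drop the initiator-side test address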
00:32:43.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:43.023 23:29:05 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:43.023 23:29:05 -- nvmf/common.sh@7 -- # uname -s 00:32:43.023 23:29:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:43.023 23:29:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:43.023 23:29:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:43.023 23:29:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:43.023 23:29:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:43.023 23:29:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:43.023 23:29:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:43.023 23:29:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:43.023 23:29:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:43.023 23:29:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:43.023 23:29:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:43.023 23:29:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:43.023 23:29:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:43.023 23:29:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:43.023 23:29:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:43.023 23:29:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:43.023 23:29:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:43.023 23:29:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:43.023 23:29:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:43.023 23:29:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.023 23:29:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.023 23:29:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.023 23:29:05 -- paths/export.sh@5 -- # export PATH 00:32:43.023 23:29:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.023 23:29:05 -- nvmf/common.sh@46 -- # : 0 00:32:43.023 23:29:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:43.023 23:29:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:43.023 23:29:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:43.023 23:29:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:43.023 23:29:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:43.023 23:29:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:43.023 23:29:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:43.023 23:29:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:43.023 23:29:05 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:43.023 23:29:05 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:43.023 23:29:05 -- host/bdevperf.sh@24 -- # nvmftestinit 00:32:43.023 23:29:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:43.023 23:29:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:43.023 23:29:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:43.023 23:29:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:43.023 23:29:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:43.023 23:29:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:43.023 23:29:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:43.023 23:29:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:43.023 23:29:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:43.023 23:29:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:43.023 23:29:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:43.023 23:29:05 -- common/autotest_common.sh@10 -- # set +x 00:32:49.617 23:29:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:49.617 23:29:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:49.617 23:29:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:49.617 23:29:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:49.617 23:29:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:49.617 23:29:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:49.617 23:29:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:49.617 23:29:12 -- nvmf/common.sh@294 -- # net_devs=() 00:32:49.617 23:29:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:49.617 23:29:12 -- nvmf/common.sh@295 
-- # e810=() 00:32:49.617 23:29:12 -- nvmf/common.sh@295 -- # local -ga e810 00:32:49.617 23:29:12 -- nvmf/common.sh@296 -- # x722=() 00:32:49.617 23:29:12 -- nvmf/common.sh@296 -- # local -ga x722 00:32:49.617 23:29:12 -- nvmf/common.sh@297 -- # mlx=() 00:32:49.617 23:29:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:49.617 23:29:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:49.617 23:29:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:49.617 23:29:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:49.617 23:29:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:49.617 23:29:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:49.617 23:29:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:49.617 23:29:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:49.617 23:29:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:49.617 23:29:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:49.617 23:29:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:49.617 23:29:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:49.617 23:29:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:49.617 23:29:12 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:49.617 23:29:12 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:49.617 23:29:12 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:49.617 23:29:12 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:49.617 23:29:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:49.617 23:29:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:49.617 23:29:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:49.617 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:49.617 23:29:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:49.617 23:29:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:49.617 23:29:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:49.617 23:29:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:49.617 23:29:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:49.617 23:29:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:49.617 23:29:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:49.617 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:49.617 23:29:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:49.617 23:29:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:49.617 23:29:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:49.617 23:29:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:49.617 23:29:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:49.617 23:29:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:49.617 23:29:12 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:49.617 23:29:12 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:49.617 23:29:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:49.617 23:29:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:49.617 23:29:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:49.617 23:29:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:49.617 23:29:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:49.617 Found 
net devices under 0000:31:00.0: cvl_0_0 00:32:49.617 23:29:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:49.617 23:29:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:49.617 23:29:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:49.617 23:29:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:49.617 23:29:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:49.617 23:29:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:49.617 Found net devices under 0000:31:00.1: cvl_0_1 00:32:49.617 23:29:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:49.617 23:29:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:49.617 23:29:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:49.617 23:29:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:49.617 23:29:12 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:49.617 23:29:12 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:49.617 23:29:12 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:49.617 23:29:12 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:49.617 23:29:12 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:49.617 23:29:12 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:49.617 23:29:12 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:49.617 23:29:12 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:49.617 23:29:12 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:49.617 23:29:12 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:49.617 23:29:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:49.617 23:29:12 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:49.617 23:29:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:49.617 23:29:12 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:49.617 23:29:12 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:49.878 23:29:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:49.878 23:29:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:49.878 23:29:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:49.878 23:29:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:49.878 23:29:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:49.878 23:29:12 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:49.878 23:29:12 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:49.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:49.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:32:49.879 00:32:49.879 --- 10.0.0.2 ping statistics --- 00:32:49.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:49.879 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:32:49.879 23:29:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:49.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:49.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:32:49.879 00:32:49.879 --- 10.0.0.1 ping statistics --- 00:32:49.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:49.879 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:32:49.879 23:29:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:49.879 23:29:12 -- nvmf/common.sh@410 -- # return 0 00:32:49.879 23:29:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:49.879 23:29:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:49.879 23:29:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:49.879 23:29:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:49.879 23:29:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:49.879 23:29:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:49.879 23:29:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:49.879 23:29:12 -- host/bdevperf.sh@25 -- # tgt_init 00:32:49.879 23:29:12 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:49.879 23:29:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:49.879 23:29:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:49.879 23:29:12 -- common/autotest_common.sh@10 -- # set +x 00:32:49.879 23:29:12 -- nvmf/common.sh@469 -- # nvmfpid=3049331 00:32:49.879 23:29:12 -- nvmf/common.sh@470 -- # waitforlisten 3049331 00:32:49.879 23:29:12 -- common/autotest_common.sh@819 -- # '[' -z 3049331 ']' 00:32:49.879 23:29:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:49.879 23:29:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:49.879 23:29:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:49.879 23:29:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:49.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:49.879 23:29:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:49.879 23:29:12 -- common/autotest_common.sh@10 -- # set +x 00:32:50.140 [2024-06-07 23:29:12.594641] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:32:50.140 [2024-06-07 23:29:12.594710] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:50.140 EAL: No free 2048 kB hugepages reported on node 1 00:32:50.140 [2024-06-07 23:29:12.683670] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:50.140 [2024-06-07 23:29:12.729815] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:50.140 [2024-06-07 23:29:12.729982] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:50.140 [2024-06-07 23:29:12.729993] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:50.140 [2024-06-07 23:29:12.730002] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
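The nvmftestinit plumbing and the target launch traced above reduce to the following sketch; interface names, addresses, the binary path, and its arguments are the ones this run reports (backgrounding of nvmf_tgt is implied by the waitforlisten step rather than shown verbatim):

  ip netns add cvl_0_0_ns_spdk                       # target-side network namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # first port becomes the target NIC
  ip addr add 10.0.0.1/24 dev cvl_0_1                # second port stays in the root namespace as the initiator
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target reachability check
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  # waitforlisten then blocks until the target listens on its RPC socket, /var/tmp/spdk.sock.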
00:32:50.140 [2024-06-07 23:29:12.730176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:50.140 [2024-06-07 23:29:12.730288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:50.141 [2024-06-07 23:29:12.730323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:50.710 23:29:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:50.710 23:29:13 -- common/autotest_common.sh@852 -- # return 0 00:32:50.710 23:29:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:50.710 23:29:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:50.710 23:29:13 -- common/autotest_common.sh@10 -- # set +x 00:32:50.970 23:29:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:50.970 23:29:13 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:50.970 23:29:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:50.970 23:29:13 -- common/autotest_common.sh@10 -- # set +x 00:32:50.970 [2024-06-07 23:29:13.410229] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:50.970 23:29:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:50.970 23:29:13 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:50.970 23:29:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:50.970 23:29:13 -- common/autotest_common.sh@10 -- # set +x 00:32:50.970 Malloc0 00:32:50.970 23:29:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:50.970 23:29:13 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:50.970 23:29:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:50.970 23:29:13 -- common/autotest_common.sh@10 -- # set +x 00:32:50.970 23:29:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:50.970 23:29:13 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:50.970 23:29:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:50.970 23:29:13 -- common/autotest_common.sh@10 -- # set +x 00:32:50.970 23:29:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:50.970 23:29:13 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:50.970 23:29:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:50.970 23:29:13 -- common/autotest_common.sh@10 -- # set +x 00:32:50.970 [2024-06-07 23:29:13.474672] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:50.970 23:29:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:50.970 23:29:13 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:32:50.970 23:29:13 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:32:50.970 23:29:13 -- nvmf/common.sh@520 -- # config=() 00:32:50.970 23:29:13 -- nvmf/common.sh@520 -- # local subsystem config 00:32:50.970 23:29:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:50.970 23:29:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:50.970 { 00:32:50.970 "params": { 00:32:50.970 "name": "Nvme$subsystem", 00:32:50.970 "trtype": "$TEST_TRANSPORT", 00:32:50.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:50.970 "adrfam": "ipv4", 00:32:50.970 "trsvcid": "$NVMF_PORT", 00:32:50.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:50.970 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:50.970 "hdgst": ${hdgst:-false}, 00:32:50.970 "ddgst": ${ddgst:-false} 00:32:50.970 }, 00:32:50.970 "method": "bdev_nvme_attach_controller" 00:32:50.970 } 00:32:50.970 EOF 00:32:50.970 )") 00:32:50.970 23:29:13 -- nvmf/common.sh@542 -- # cat 00:32:50.970 23:29:13 -- nvmf/common.sh@544 -- # jq . 00:32:50.970 23:29:13 -- nvmf/common.sh@545 -- # IFS=, 00:32:50.970 23:29:13 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:50.970 "params": { 00:32:50.970 "name": "Nvme1", 00:32:50.970 "trtype": "tcp", 00:32:50.970 "traddr": "10.0.0.2", 00:32:50.970 "adrfam": "ipv4", 00:32:50.970 "trsvcid": "4420", 00:32:50.970 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:50.970 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:50.970 "hdgst": false, 00:32:50.970 "ddgst": false 00:32:50.970 }, 00:32:50.970 "method": "bdev_nvme_attach_controller" 00:32:50.970 }' 00:32:50.970 [2024-06-07 23:29:13.525739] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:32:50.970 [2024-06-07 23:29:13.525785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3049643 ] 00:32:50.970 EAL: No free 2048 kB hugepages reported on node 1 00:32:50.970 [2024-06-07 23:29:13.584023] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:50.970 [2024-06-07 23:29:13.612743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:51.230 Running I/O for 1 seconds... 00:32:52.226 00:32:52.226 Latency(us) 00:32:52.226 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:52.226 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:52.226 Verification LBA range: start 0x0 length 0x4000 00:32:52.226 Nvme1n1 : 1.01 13941.83 54.46 0.00 0.00 9137.63 1146.88 16711.68 00:32:52.226 =================================================================================================================== 00:32:52.226 Total : 13941.83 54.46 0.00 0.00 9137.63 1146.88 16711.68 00:32:52.226 23:29:14 -- host/bdevperf.sh@30 -- # bdevperfpid=3049971 00:32:52.226 23:29:14 -- host/bdevperf.sh@32 -- # sleep 3 00:32:52.226 23:29:14 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:32:52.226 23:29:14 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:32:52.226 23:29:14 -- nvmf/common.sh@520 -- # config=() 00:32:52.226 23:29:14 -- nvmf/common.sh@520 -- # local subsystem config 00:32:52.226 23:29:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:52.226 23:29:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:52.226 { 00:32:52.226 "params": { 00:32:52.226 "name": "Nvme$subsystem", 00:32:52.226 "trtype": "$TEST_TRANSPORT", 00:32:52.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:52.226 "adrfam": "ipv4", 00:32:52.226 "trsvcid": "$NVMF_PORT", 00:32:52.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:52.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:52.226 "hdgst": ${hdgst:-false}, 00:32:52.226 "ddgst": ${ddgst:-false} 00:32:52.226 }, 00:32:52.226 "method": "bdev_nvme_attach_controller" 00:32:52.226 } 00:32:52.226 EOF 00:32:52.226 )") 00:32:52.226 23:29:14 -- nvmf/common.sh@542 -- # cat 00:32:52.226 23:29:14 -- nvmf/common.sh@544 -- # jq . 
00:32:52.226 23:29:14 -- nvmf/common.sh@545 -- # IFS=, 00:32:52.226 23:29:14 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:52.226 "params": { 00:32:52.226 "name": "Nvme1", 00:32:52.226 "trtype": "tcp", 00:32:52.226 "traddr": "10.0.0.2", 00:32:52.226 "adrfam": "ipv4", 00:32:52.226 "trsvcid": "4420", 00:32:52.226 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:52.226 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:52.226 "hdgst": false, 00:32:52.226 "ddgst": false 00:32:52.226 }, 00:32:52.226 "method": "bdev_nvme_attach_controller" 00:32:52.226 }' 00:32:52.496 [2024-06-07 23:29:14.918255] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:32:52.496 [2024-06-07 23:29:14.918308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3049971 ] 00:32:52.496 EAL: No free 2048 kB hugepages reported on node 1 00:32:52.496 [2024-06-07 23:29:14.977956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:52.496 [2024-06-07 23:29:15.005154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:52.756 Running I/O for 15 seconds... 00:32:55.307 23:29:17 -- host/bdevperf.sh@33 -- # kill -9 3049331 00:32:55.307 23:29:17 -- host/bdevperf.sh@35 -- # sleep 3 00:32:55.307 [2024-06-07 23:29:17.886571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.307 [2024-06-07 23:29:17.886613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.307 [2024-06-07 23:29:17.886632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.307 [2024-06-07 23:29:17.886642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.307 [2024-06-07 23:29:17.886653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.307 [2024-06-07 23:29:17.886661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.307 [2024-06-07 23:29:17.886671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.307 [2024-06-07 23:29:17.886678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.307 [2024-06-07 23:29:17.886688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.307 [2024-06-07 23:29:17.886696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.307 [2024-06-07 23:29:17.886706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.307 [2024-06-07 23:29:17.886715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.307 [2024-06-07 23:29:17.886724] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.307 [2024-06-07 23:29:17.886731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.307 [2024-06-07 23:29:17.886740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.307 [2024-06-07 23:29:17.886748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.307 [2024-06-07 23:29:17.886757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.307 [2024-06-07 23:29:17.886764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.307 [2024-06-07 23:29:17.886773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.307 [2024-06-07 23:29:17.886780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.307 [2024-06-07 23:29:17.886789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.307 [2024-06-07 23:29:17.886796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.307 [2024-06-07 23:29:17.886805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.307 [2024-06-07 23:29:17.886812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.307 [2024-06-07 23:29:17.886826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.307 [2024-06-07 23:29:17.886834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.307 [2024-06-07 23:29:17.886843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.307 [2024-06-07 23:29:17.886851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.307 [2024-06-07 23:29:17.886860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.307 [2024-06-07 23:29:17.886868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.307 [2024-06-07 23:29:17.886877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.307 [2024-06-07 23:29:17.886884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.307 [2024-06-07 23:29:17.886893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:28 nsid:1 lba:21784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.307 [2024-06-07 23:29:17.886900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.307 [2024-06-07 23:29:17.886909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.307 [2024-06-07 23:29:17.886918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.307 [2024-06-07 23:29:17.886929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.307 [2024-06-07 23:29:17.886938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.307 [2024-06-07 23:29:17.886948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.307 [2024-06-07 23:29:17.886957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.307 [2024-06-07 23:29:17.886968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.307 [2024-06-07 23:29:17.886979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.307 [2024-06-07 23:29:17.886991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.307 [2024-06-07 23:29:17.887000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.307 [2024-06-07 23:29:17.887009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.307 [2024-06-07 23:29:17.887017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.307 [2024-06-07 23:29:17.887030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.307 [2024-06-07 23:29:17.887040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.307 [2024-06-07 23:29:17.887051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.307 [2024-06-07 23:29:17.887058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.307 [2024-06-07 23:29:17.887070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.307 [2024-06-07 23:29:17.887077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.307 [2024-06-07 23:29:17.887086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21872 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.307 [2024-06-07 23:29:17.887093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.307 [2024-06-07 23:29:17.887102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.307 [2024-06-07 23:29:17.887109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.307 [2024-06-07 23:29:17.887118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.307 [2024-06-07 23:29:17.887126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.307 [2024-06-07 23:29:17.887135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.307 [2024-06-07 23:29:17.887141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.308 [2024-06-07 23:29:17.887157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.308 [2024-06-07 23:29:17.887173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.308 [2024-06-07 23:29:17.887189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.308 [2024-06-07 23:29:17.887206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.308 [2024-06-07 23:29:17.887222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.308 [2024-06-07 23:29:17.887238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:55.308 [2024-06-07 23:29:17.887260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.308 [2024-06-07 23:29:17.887278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.308 [2024-06-07 23:29:17.887294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.308 [2024-06-07 23:29:17.887311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.308 [2024-06-07 23:29:17.887327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.308 [2024-06-07 23:29:17.887343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.308 [2024-06-07 23:29:17.887359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.308 [2024-06-07 23:29:17.887374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.308 [2024-06-07 23:29:17.887391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.308 [2024-06-07 23:29:17.887407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.308 [2024-06-07 23:29:17.887423] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.308 [2024-06-07 23:29:17.887438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.308 [2024-06-07 23:29:17.887455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.308 [2024-06-07 23:29:17.887471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.308 [2024-06-07 23:29:17.887488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.308 [2024-06-07 23:29:17.887505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.308 [2024-06-07 23:29:17.887521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.308 [2024-06-07 23:29:17.887537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.308 [2024-06-07 23:29:17.887553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.308 [2024-06-07 23:29:17.887569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.308 [2024-06-07 23:29:17.887584] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.308 [2024-06-07 23:29:17.887601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.308 [2024-06-07 23:29:17.887617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.308 [2024-06-07 23:29:17.887632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.308 [2024-06-07 23:29:17.887649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.308 [2024-06-07 23:29:17.887665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.308 [2024-06-07 23:29:17.887682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.308 [2024-06-07 23:29:17.887699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.308 [2024-06-07 23:29:17.887716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.308 [2024-06-07 23:29:17.887732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.308 [2024-06-07 23:29:17.887748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.308 [2024-06-07 23:29:17.887758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.308 [2024-06-07 23:29:17.887765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.887774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.309 [2024-06-07 23:29:17.887781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.887790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.309 [2024-06-07 23:29:17.887796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.887806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.309 [2024-06-07 23:29:17.887813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.887822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.309 [2024-06-07 23:29:17.887829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.887838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.309 [2024-06-07 23:29:17.887846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.887855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.309 [2024-06-07 23:29:17.887862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.887871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.309 [2024-06-07 23:29:17.887878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.887887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.309 [2024-06-07 23:29:17.887895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.887905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.309 [2024-06-07 23:29:17.887911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:55.309 [2024-06-07 23:29:17.887920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.309 [2024-06-07 23:29:17.887927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.887936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.309 [2024-06-07 23:29:17.887943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.887952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.309 [2024-06-07 23:29:17.887959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.887968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.309 [2024-06-07 23:29:17.887975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.887984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.309 [2024-06-07 23:29:17.887992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.888000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.309 [2024-06-07 23:29:17.888007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.888016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.309 [2024-06-07 23:29:17.888023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.888032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.309 [2024-06-07 23:29:17.888039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.888048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:22232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.309 [2024-06-07 23:29:17.888055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.888064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.309 [2024-06-07 23:29:17.888071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.888081] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.309 [2024-06-07 23:29:17.888088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.888099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.309 [2024-06-07 23:29:17.888105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.888115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.309 [2024-06-07 23:29:17.888121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.888130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.309 [2024-06-07 23:29:17.888137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.888147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.309 [2024-06-07 23:29:17.888154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.888163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.309 [2024-06-07 23:29:17.888170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.888179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.309 [2024-06-07 23:29:17.888186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.888195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.309 [2024-06-07 23:29:17.888202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.888211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.309 [2024-06-07 23:29:17.888218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.888227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.309 [2024-06-07 23:29:17.888233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.888335] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.309 [2024-06-07 23:29:17.888344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.888353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.309 [2024-06-07 23:29:17.888361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.888370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.309 [2024-06-07 23:29:17.888377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.888386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.309 [2024-06-07 23:29:17.888395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.888404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.309 [2024-06-07 23:29:17.888411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.888420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.309 [2024-06-07 23:29:17.888427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.888436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.309 [2024-06-07 23:29:17.888443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.888452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.309 [2024-06-07 23:29:17.888459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.888468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.309 [2024-06-07 23:29:17.888475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.888484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.309 [2024-06-07 23:29:17.888491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.309 [2024-06-07 23:29:17.888500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 
lba:21776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.310 [2024-06-07 23:29:17.888507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.310 [2024-06-07 23:29:17.888516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.310 [2024-06-07 23:29:17.888523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.310 [2024-06-07 23:29:17.888532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.310 [2024-06-07 23:29:17.888539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.310 [2024-06-07 23:29:17.888548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.310 [2024-06-07 23:29:17.888555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.310 [2024-06-07 23:29:17.888564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.310 [2024-06-07 23:29:17.888571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.310 [2024-06-07 23:29:17.888580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.310 [2024-06-07 23:29:17.888586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.310 [2024-06-07 23:29:17.888598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.310 [2024-06-07 23:29:17.888604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.310 [2024-06-07 23:29:17.888614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.310 [2024-06-07 23:29:17.888620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.310 [2024-06-07 23:29:17.888629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.310 [2024-06-07 23:29:17.888636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.310 [2024-06-07 23:29:17.888645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.310 [2024-06-07 23:29:17.888652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.310 [2024-06-07 23:29:17.888661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:55.310 [2024-06-07 23:29:17.888668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.310 [2024-06-07 23:29:17.888676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.310 [2024-06-07 23:29:17.888684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.310 [2024-06-07 23:29:17.888693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.310 [2024-06-07 23:29:17.888700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.310 [2024-06-07 23:29:17.888709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.310 [2024-06-07 23:29:17.888716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.310 [2024-06-07 23:29:17.888725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.310 [2024-06-07 23:29:17.888731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.310 [2024-06-07 23:29:17.888740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.310 [2024-06-07 23:29:17.888747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.310 [2024-06-07 23:29:17.888756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.310 [2024-06-07 23:29:17.888763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.310 [2024-06-07 23:29:17.888772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.310 [2024-06-07 23:29:17.888779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.310 [2024-06-07 23:29:17.888788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.310 [2024-06-07 23:29:17.888796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.310 [2024-06-07 23:29:17.888806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.310 [2024-06-07 23:29:17.888812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.310 [2024-06-07 23:29:17.888821] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56560 is same with the state(5) to be set 00:32:55.310 [2024-06-07 23:29:17.888830] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:55.310 [2024-06-07 23:29:17.888835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:55.310 [2024-06-07 23:29:17.888843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21848 len:8 PRP1 0x0 PRP2 0x0 00:32:55.310 [2024-06-07 23:29:17.888851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.310 [2024-06-07 23:29:17.888890] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b56560 was disconnected and freed. reset controller. 00:32:55.310 [2024-06-07 23:29:17.888934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:55.310 [2024-06-07 23:29:17.888944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.310 [2024-06-07 23:29:17.888952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:55.310 [2024-06-07 23:29:17.888959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.310 [2024-06-07 23:29:17.888967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:55.310 [2024-06-07 23:29:17.888974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.310 [2024-06-07 23:29:17.888982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:55.310 [2024-06-07 23:29:17.888989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.310 [2024-06-07 23:29:17.888996] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.310 [2024-06-07 23:29:17.891367] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.310 [2024-06-07 23:29:17.891385] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.310 [2024-06-07 23:29:17.891906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.310 [2024-06-07 23:29:17.892298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.310 [2024-06-07 23:29:17.892319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.310 [2024-06-07 23:29:17.892328] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.310 [2024-06-07 23:29:17.892462] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.310 [2024-06-07 23:29:17.892569] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.310 [2024-06-07 23:29:17.892577] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.310 [2024-06-07 23:29:17.892586] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.310 [2024-06-07 23:29:17.894726] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.310 [2024-06-07 23:29:17.904046] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.310 [2024-06-07 23:29:17.904625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.310 [2024-06-07 23:29:17.904959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.310 [2024-06-07 23:29:17.904968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.310 [2024-06-07 23:29:17.904976] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.310 [2024-06-07 23:29:17.905137] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.310 [2024-06-07 23:29:17.905284] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.310 [2024-06-07 23:29:17.905293] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.310 [2024-06-07 23:29:17.905300] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.310 [2024-06-07 23:29:17.907622] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.310 [2024-06-07 23:29:17.916612] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.310 [2024-06-07 23:29:17.917147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.310 [2024-06-07 23:29:17.917504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.310 [2024-06-07 23:29:17.917514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.310 [2024-06-07 23:29:17.917522] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.310 [2024-06-07 23:29:17.917646] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.310 [2024-06-07 23:29:17.917769] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.310 [2024-06-07 23:29:17.917776] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.310 [2024-06-07 23:29:17.917783] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.310 [2024-06-07 23:29:17.919980] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
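(Annotation) The long run of ABORTED - SQ DELETION completions above is the host side of the fault injection: once the target process is killed, the initiator aborts its outstanding verify I/O (queue depth 128), the data qpair 0x1b56560 is disconnected and freed, and bdev_nvme begins resetting the controller. Each reset attempt then fails the same way, because connect() to 10.0.0.2 port 4420 returns errno 111 (ECONNREFUSED) while nothing is listening, which is why the "resetting controller" / "Resetting controller failed." cycle repeats below. The refusal can be checked from the build host with plain bash, a sketch assuming bash's /dev/tcp pseudo-device is available:

  # With the target gone, a raw TCP connect to the listener address is refused,
  # matching the "connect() failed, errno = 111" lines in the log.
  if ! (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
      echo "10.0.0.2:4420 refused the connection (ECONNREFUSED, errno 111)"
  fi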
00:32:55.311 [2024-06-07 23:29:17.928825] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.311 [2024-06-07 23:29:17.929358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.311 [2024-06-07 23:29:17.929794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.311 [2024-06-07 23:29:17.929806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.311 [2024-06-07 23:29:17.929816] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.311 [2024-06-07 23:29:17.929959] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.311 [2024-06-07 23:29:17.930105] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.311 [2024-06-07 23:29:17.930113] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.311 [2024-06-07 23:29:17.930120] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.311 [2024-06-07 23:29:17.932457] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.311 [2024-06-07 23:29:17.941400] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.311 [2024-06-07 23:29:17.941945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.311 [2024-06-07 23:29:17.942295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.311 [2024-06-07 23:29:17.942306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.311 [2024-06-07 23:29:17.942314] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.311 [2024-06-07 23:29:17.942438] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.311 [2024-06-07 23:29:17.942597] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.311 [2024-06-07 23:29:17.942605] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.311 [2024-06-07 23:29:17.942612] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.311 [2024-06-07 23:29:17.944844] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.311 [2024-06-07 23:29:17.953785] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.311 [2024-06-07 23:29:17.954275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.311 [2024-06-07 23:29:17.954635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.311 [2024-06-07 23:29:17.954645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.311 [2024-06-07 23:29:17.954652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.311 [2024-06-07 23:29:17.954830] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.311 [2024-06-07 23:29:17.954918] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.311 [2024-06-07 23:29:17.954925] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.311 [2024-06-07 23:29:17.954932] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.311 [2024-06-07 23:29:17.957279] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.311 [2024-06-07 23:29:17.966305] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.311 [2024-06-07 23:29:17.966887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.311 [2024-06-07 23:29:17.967254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.311 [2024-06-07 23:29:17.967268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.311 [2024-06-07 23:29:17.967277] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.311 [2024-06-07 23:29:17.967456] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.311 [2024-06-07 23:29:17.967583] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.311 [2024-06-07 23:29:17.967591] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.311 [2024-06-07 23:29:17.967599] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.311 [2024-06-07 23:29:17.969784] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.311 [2024-06-07 23:29:17.978803] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.311 [2024-06-07 23:29:17.979340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.311 [2024-06-07 23:29:17.979718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.311 [2024-06-07 23:29:17.979728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.311 [2024-06-07 23:29:17.979736] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.311 [2024-06-07 23:29:17.979896] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.311 [2024-06-07 23:29:17.980056] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.311 [2024-06-07 23:29:17.980063] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.311 [2024-06-07 23:29:17.980070] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.311 [2024-06-07 23:29:17.982292] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.574 [2024-06-07 23:29:17.991321] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.574 [2024-06-07 23:29:17.991864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.574 [2024-06-07 23:29:17.992207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.574 [2024-06-07 23:29:17.992216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.574 [2024-06-07 23:29:17.992223] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.574 [2024-06-07 23:29:17.992333] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.574 [2024-06-07 23:29:17.992457] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.574 [2024-06-07 23:29:17.992464] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.574 [2024-06-07 23:29:17.992471] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.574 [2024-06-07 23:29:17.994561] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.574 [2024-06-07 23:29:18.003905] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.574 [2024-06-07 23:29:18.004470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.574 [2024-06-07 23:29:18.004836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.574 [2024-06-07 23:29:18.004850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.574 [2024-06-07 23:29:18.004859] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.574 [2024-06-07 23:29:18.005020] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.574 [2024-06-07 23:29:18.005147] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.574 [2024-06-07 23:29:18.005155] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.574 [2024-06-07 23:29:18.005163] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.574 [2024-06-07 23:29:18.007394] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.574 [2024-06-07 23:29:18.016282] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.574 [2024-06-07 23:29:18.016836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.574 [2024-06-07 23:29:18.017137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.574 [2024-06-07 23:29:18.017151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.574 [2024-06-07 23:29:18.017159] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.574 [2024-06-07 23:29:18.017363] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.574 [2024-06-07 23:29:18.017510] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.574 [2024-06-07 23:29:18.017518] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.574 [2024-06-07 23:29:18.017525] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.574 [2024-06-07 23:29:18.019796] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.574 [2024-06-07 23:29:18.028795] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.574 [2024-06-07 23:29:18.029322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.574 [2024-06-07 23:29:18.029680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.574 [2024-06-07 23:29:18.029689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.574 [2024-06-07 23:29:18.029697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.574 [2024-06-07 23:29:18.029820] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.574 [2024-06-07 23:29:18.029980] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.574 [2024-06-07 23:29:18.029988] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.574 [2024-06-07 23:29:18.029995] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.574 [2024-06-07 23:29:18.032397] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.574 [2024-06-07 23:29:18.041237] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.574 [2024-06-07 23:29:18.041706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.574 [2024-06-07 23:29:18.042089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.574 [2024-06-07 23:29:18.042098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.574 [2024-06-07 23:29:18.042105] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.574 [2024-06-07 23:29:18.042252] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.574 [2024-06-07 23:29:18.042394] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.574 [2024-06-07 23:29:18.042402] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.574 [2024-06-07 23:29:18.042409] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.574 [2024-06-07 23:29:18.044516] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.574 [2024-06-07 23:29:18.053707] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.574 [2024-06-07 23:29:18.054207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.574 [2024-06-07 23:29:18.054585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.574 [2024-06-07 23:29:18.054595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.574 [2024-06-07 23:29:18.054606] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.574 [2024-06-07 23:29:18.054766] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.574 [2024-06-07 23:29:18.054925] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.574 [2024-06-07 23:29:18.054933] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.574 [2024-06-07 23:29:18.054940] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.575 [2024-06-07 23:29:18.057437] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.575 [2024-06-07 23:29:18.066236] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.575 [2024-06-07 23:29:18.066727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.575 [2024-06-07 23:29:18.067065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.575 [2024-06-07 23:29:18.067074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.575 [2024-06-07 23:29:18.067081] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.575 [2024-06-07 23:29:18.067204] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.575 [2024-06-07 23:29:18.067423] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.575 [2024-06-07 23:29:18.067431] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.575 [2024-06-07 23:29:18.067438] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.575 [2024-06-07 23:29:18.069546] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.575 [2024-06-07 23:29:18.078710] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.575 [2024-06-07 23:29:18.079193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.575 [2024-06-07 23:29:18.079413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.575 [2024-06-07 23:29:18.079425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.575 [2024-06-07 23:29:18.079433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.575 [2024-06-07 23:29:18.079576] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.575 [2024-06-07 23:29:18.079718] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.575 [2024-06-07 23:29:18.079726] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.575 [2024-06-07 23:29:18.079733] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.575 [2024-06-07 23:29:18.081966] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.575 [2024-06-07 23:29:18.091138] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.575 [2024-06-07 23:29:18.091749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.575 [2024-06-07 23:29:18.092104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.575 [2024-06-07 23:29:18.092117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.575 [2024-06-07 23:29:18.092126] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.575 [2024-06-07 23:29:18.092263] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.575 [2024-06-07 23:29:18.092445] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.575 [2024-06-07 23:29:18.092454] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.575 [2024-06-07 23:29:18.092461] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.575 [2024-06-07 23:29:18.094827] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.575 [2024-06-07 23:29:18.103588] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.575 [2024-06-07 23:29:18.104205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.575 [2024-06-07 23:29:18.104570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.575 [2024-06-07 23:29:18.104584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.575 [2024-06-07 23:29:18.104593] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.575 [2024-06-07 23:29:18.104773] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.575 [2024-06-07 23:29:18.104881] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.575 [2024-06-07 23:29:18.104889] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.575 [2024-06-07 23:29:18.104897] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.575 [2024-06-07 23:29:18.107122] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.575 [2024-06-07 23:29:18.115983] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.575 [2024-06-07 23:29:18.116518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.575 [2024-06-07 23:29:18.116861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.575 [2024-06-07 23:29:18.116870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.575 [2024-06-07 23:29:18.116878] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.575 [2024-06-07 23:29:18.117001] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.575 [2024-06-07 23:29:18.117143] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.575 [2024-06-07 23:29:18.117150] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.575 [2024-06-07 23:29:18.117157] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.575 [2024-06-07 23:29:18.119270] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.575 [2024-06-07 23:29:18.128569] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.575 [2024-06-07 23:29:18.129060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.575 [2024-06-07 23:29:18.129397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.575 [2024-06-07 23:29:18.129407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.575 [2024-06-07 23:29:18.129414] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.575 [2024-06-07 23:29:18.129538] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.575 [2024-06-07 23:29:18.129666] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.575 [2024-06-07 23:29:18.129674] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.575 [2024-06-07 23:29:18.129680] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.575 [2024-06-07 23:29:18.132006] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.575 [2024-06-07 23:29:18.141031] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.575 [2024-06-07 23:29:18.141527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.575 [2024-06-07 23:29:18.141904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.575 [2024-06-07 23:29:18.141917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.575 [2024-06-07 23:29:18.141927] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.575 [2024-06-07 23:29:18.142088] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.575 [2024-06-07 23:29:18.142215] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.575 [2024-06-07 23:29:18.142223] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.575 [2024-06-07 23:29:18.142230] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.575 [2024-06-07 23:29:18.144364] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.575 [2024-06-07 23:29:18.153425] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.575 [2024-06-07 23:29:18.153828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.575 [2024-06-07 23:29:18.154168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.575 [2024-06-07 23:29:18.154178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.575 [2024-06-07 23:29:18.154186] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.575 [2024-06-07 23:29:18.154333] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.575 [2024-06-07 23:29:18.154475] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.575 [2024-06-07 23:29:18.154484] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.575 [2024-06-07 23:29:18.154490] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.575 [2024-06-07 23:29:18.156724] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.575 [2024-06-07 23:29:18.166013] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.575 [2024-06-07 23:29:18.166486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.575 [2024-06-07 23:29:18.166856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.575 [2024-06-07 23:29:18.166869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.575 [2024-06-07 23:29:18.166878] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.575 [2024-06-07 23:29:18.167057] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.575 [2024-06-07 23:29:18.167220] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.575 [2024-06-07 23:29:18.167232] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.575 [2024-06-07 23:29:18.167240] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.575 [2024-06-07 23:29:18.169651] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.575 [2024-06-07 23:29:18.178517] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.575 [2024-06-07 23:29:18.178945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.575 [2024-06-07 23:29:18.179286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.576 [2024-06-07 23:29:18.179296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.576 [2024-06-07 23:29:18.179304] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.576 [2024-06-07 23:29:18.179428] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.576 [2024-06-07 23:29:18.179552] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.576 [2024-06-07 23:29:18.179560] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.576 [2024-06-07 23:29:18.179567] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.576 [2024-06-07 23:29:18.181640] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.576 [2024-06-07 23:29:18.190759] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.576 [2024-06-07 23:29:18.191247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.576 [2024-06-07 23:29:18.191593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.576 [2024-06-07 23:29:18.191602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.576 [2024-06-07 23:29:18.191610] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.576 [2024-06-07 23:29:18.191751] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.576 [2024-06-07 23:29:18.191875] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.576 [2024-06-07 23:29:18.191882] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.576 [2024-06-07 23:29:18.191888] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.576 [2024-06-07 23:29:18.194141] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.576 [2024-06-07 23:29:18.203372] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.576 [2024-06-07 23:29:18.203862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.576 [2024-06-07 23:29:18.204197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.576 [2024-06-07 23:29:18.204206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.576 [2024-06-07 23:29:18.204214] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.576 [2024-06-07 23:29:18.204377] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.576 [2024-06-07 23:29:18.204519] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.576 [2024-06-07 23:29:18.204527] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.576 [2024-06-07 23:29:18.204537] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.576 [2024-06-07 23:29:18.206949] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.576 [2024-06-07 23:29:18.215826] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.576 [2024-06-07 23:29:18.216372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.576 [2024-06-07 23:29:18.216743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.576 [2024-06-07 23:29:18.216755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.576 [2024-06-07 23:29:18.216765] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.576 [2024-06-07 23:29:18.216925] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.576 [2024-06-07 23:29:18.217108] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.576 [2024-06-07 23:29:18.217116] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.576 [2024-06-07 23:29:18.217123] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.576 [2024-06-07 23:29:18.219531] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.576 [2024-06-07 23:29:18.228372] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.576 [2024-06-07 23:29:18.228890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.576 [2024-06-07 23:29:18.229304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.576 [2024-06-07 23:29:18.229315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.576 [2024-06-07 23:29:18.229323] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.576 [2024-06-07 23:29:18.229484] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.576 [2024-06-07 23:29:18.229590] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.576 [2024-06-07 23:29:18.229598] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.576 [2024-06-07 23:29:18.229605] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.576 [2024-06-07 23:29:18.231873] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.576 [2024-06-07 23:29:18.240971] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.576 [2024-06-07 23:29:18.241500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.576 [2024-06-07 23:29:18.241868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.576 [2024-06-07 23:29:18.241877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.576 [2024-06-07 23:29:18.241884] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.576 [2024-06-07 23:29:18.242026] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.576 [2024-06-07 23:29:18.242150] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.576 [2024-06-07 23:29:18.242157] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.576 [2024-06-07 23:29:18.242164] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.576 [2024-06-07 23:29:18.244546] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.576 [2024-06-07 23:29:18.253475] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.840 [2024-06-07 23:29:18.253961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.840 [2024-06-07 23:29:18.254304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.840 [2024-06-07 23:29:18.254314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.840 [2024-06-07 23:29:18.254321] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.840 [2024-06-07 23:29:18.254462] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.840 [2024-06-07 23:29:18.254586] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.840 [2024-06-07 23:29:18.254593] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.840 [2024-06-07 23:29:18.254600] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.840 [2024-06-07 23:29:18.257050] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.840 [2024-06-07 23:29:18.265734] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.840 [2024-06-07 23:29:18.266335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.840 [2024-06-07 23:29:18.266692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.840 [2024-06-07 23:29:18.266704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.840 [2024-06-07 23:29:18.266714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.840 [2024-06-07 23:29:18.266875] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.840 [2024-06-07 23:29:18.267038] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.840 [2024-06-07 23:29:18.267046] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.840 [2024-06-07 23:29:18.267053] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.840 [2024-06-07 23:29:18.269333] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.840 [2024-06-07 23:29:18.278107] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.840 [2024-06-07 23:29:18.278730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.840 [2024-06-07 23:29:18.279127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.840 [2024-06-07 23:29:18.279140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.840 [2024-06-07 23:29:18.279149] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.840 [2024-06-07 23:29:18.279317] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.840 [2024-06-07 23:29:18.279463] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.840 [2024-06-07 23:29:18.279471] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.840 [2024-06-07 23:29:18.279478] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.840 [2024-06-07 23:29:18.281823] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.840 [2024-06-07 23:29:18.290520] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.840 [2024-06-07 23:29:18.291015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.840 [2024-06-07 23:29:18.291386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.840 [2024-06-07 23:29:18.291399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.840 [2024-06-07 23:29:18.291408] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.840 [2024-06-07 23:29:18.291587] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.840 [2024-06-07 23:29:18.291786] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.840 [2024-06-07 23:29:18.291794] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.840 [2024-06-07 23:29:18.291802] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.840 [2024-06-07 23:29:18.294136] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.840 [2024-06-07 23:29:18.303056] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.840 [2024-06-07 23:29:18.303699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.840 [2024-06-07 23:29:18.304058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.840 [2024-06-07 23:29:18.304071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.840 [2024-06-07 23:29:18.304080] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.840 [2024-06-07 23:29:18.304222] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.840 [2024-06-07 23:29:18.304356] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.840 [2024-06-07 23:29:18.304365] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.840 [2024-06-07 23:29:18.304372] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.840 [2024-06-07 23:29:18.306718] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.840 [2024-06-07 23:29:18.315634] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.840 [2024-06-07 23:29:18.316265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.840 [2024-06-07 23:29:18.316589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.840 [2024-06-07 23:29:18.316601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.840 [2024-06-07 23:29:18.316611] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.840 [2024-06-07 23:29:18.316790] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.840 [2024-06-07 23:29:18.316881] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.840 [2024-06-07 23:29:18.316889] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.840 [2024-06-07 23:29:18.316897] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.840 [2024-06-07 23:29:18.319045] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.840 [2024-06-07 23:29:18.328010] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.840 [2024-06-07 23:29:18.328608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.840 [2024-06-07 23:29:18.328998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.840 [2024-06-07 23:29:18.329008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.840 [2024-06-07 23:29:18.329016] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.840 [2024-06-07 23:29:18.329194] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.840 [2024-06-07 23:29:18.329322] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.840 [2024-06-07 23:29:18.329330] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.840 [2024-06-07 23:29:18.329337] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.840 [2024-06-07 23:29:18.331442] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.840 [2024-06-07 23:29:18.340382] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.840 [2024-06-07 23:29:18.340757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.840 [2024-06-07 23:29:18.341098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.840 [2024-06-07 23:29:18.341107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.840 [2024-06-07 23:29:18.341114] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.840 [2024-06-07 23:29:18.341219] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.840 [2024-06-07 23:29:18.341366] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.840 [2024-06-07 23:29:18.341374] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.840 [2024-06-07 23:29:18.341380] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.840 [2024-06-07 23:29:18.343847] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.840 [2024-06-07 23:29:18.352726] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.840 [2024-06-07 23:29:18.353216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.840 [2024-06-07 23:29:18.353608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.840 [2024-06-07 23:29:18.353621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.841 [2024-06-07 23:29:18.353630] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.841 [2024-06-07 23:29:18.353810] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.841 [2024-06-07 23:29:18.353973] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.841 [2024-06-07 23:29:18.353981] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.841 [2024-06-07 23:29:18.353988] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.841 [2024-06-07 23:29:18.356412] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.841 [2024-06-07 23:29:18.364974] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.841 [2024-06-07 23:29:18.365396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.841 [2024-06-07 23:29:18.365634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.841 [2024-06-07 23:29:18.365648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.841 [2024-06-07 23:29:18.365656] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.841 [2024-06-07 23:29:18.365780] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.841 [2024-06-07 23:29:18.365959] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.841 [2024-06-07 23:29:18.365966] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.841 [2024-06-07 23:29:18.365973] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.841 [2024-06-07 23:29:18.368262] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.841 [2024-06-07 23:29:18.377426] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.841 [2024-06-07 23:29:18.377982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.841 [2024-06-07 23:29:18.378448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.841 [2024-06-07 23:29:18.378463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.841 [2024-06-07 23:29:18.378472] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.841 [2024-06-07 23:29:18.378634] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.841 [2024-06-07 23:29:18.378779] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.841 [2024-06-07 23:29:18.378787] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.841 [2024-06-07 23:29:18.378795] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.841 [2024-06-07 23:29:18.380965] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.841 [2024-06-07 23:29:18.389892] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.841 [2024-06-07 23:29:18.390454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.841 [2024-06-07 23:29:18.390846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.841 [2024-06-07 23:29:18.390855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.841 [2024-06-07 23:29:18.390863] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.841 [2024-06-07 23:29:18.391059] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.841 [2024-06-07 23:29:18.391165] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.841 [2024-06-07 23:29:18.391173] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.841 [2024-06-07 23:29:18.391179] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.841 [2024-06-07 23:29:18.393399] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.841 [2024-06-07 23:29:18.402405] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.841 [2024-06-07 23:29:18.402762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.841 [2024-06-07 23:29:18.403135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.841 [2024-06-07 23:29:18.403144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.841 [2024-06-07 23:29:18.403156] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.841 [2024-06-07 23:29:18.403284] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.841 [2024-06-07 23:29:18.403445] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.841 [2024-06-07 23:29:18.403452] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.841 [2024-06-07 23:29:18.403459] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.841 [2024-06-07 23:29:18.405654] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.841 [2024-06-07 23:29:18.414675] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.841 [2024-06-07 23:29:18.415199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.841 [2024-06-07 23:29:18.415592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.841 [2024-06-07 23:29:18.415602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.841 [2024-06-07 23:29:18.415609] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.841 [2024-06-07 23:29:18.415696] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.841 [2024-06-07 23:29:18.415856] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.841 [2024-06-07 23:29:18.415863] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.841 [2024-06-07 23:29:18.415870] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.841 [2024-06-07 23:29:18.417975] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.841 [2024-06-07 23:29:18.427240] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.841 [2024-06-07 23:29:18.427740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.841 [2024-06-07 23:29:18.428031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.841 [2024-06-07 23:29:18.428041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.841 [2024-06-07 23:29:18.428048] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.841 [2024-06-07 23:29:18.428248] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.841 [2024-06-07 23:29:18.428391] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.841 [2024-06-07 23:29:18.428398] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.841 [2024-06-07 23:29:18.428405] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.841 [2024-06-07 23:29:18.430744] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.841 [2024-06-07 23:29:18.439626] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.841 [2024-06-07 23:29:18.440149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.841 [2024-06-07 23:29:18.440491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.841 [2024-06-07 23:29:18.440501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.841 [2024-06-07 23:29:18.440508] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.841 [2024-06-07 23:29:18.440671] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.841 [2024-06-07 23:29:18.440812] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.841 [2024-06-07 23:29:18.440820] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.841 [2024-06-07 23:29:18.440826] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.841 [2024-06-07 23:29:18.443078] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.841 [2024-06-07 23:29:18.451948] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.841 [2024-06-07 23:29:18.452569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.841 [2024-06-07 23:29:18.452933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.841 [2024-06-07 23:29:18.452946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.841 [2024-06-07 23:29:18.452955] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.841 [2024-06-07 23:29:18.453171] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.841 [2024-06-07 23:29:18.453323] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.841 [2024-06-07 23:29:18.453333] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.841 [2024-06-07 23:29:18.453341] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.841 [2024-06-07 23:29:18.455397] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.841 [2024-06-07 23:29:18.464451] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.841 [2024-06-07 23:29:18.464938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.841 [2024-06-07 23:29:18.465361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.841 [2024-06-07 23:29:18.465372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.841 [2024-06-07 23:29:18.465379] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.842 [2024-06-07 23:29:18.465539] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.842 [2024-06-07 23:29:18.465663] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.842 [2024-06-07 23:29:18.465671] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.842 [2024-06-07 23:29:18.465678] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.842 [2024-06-07 23:29:18.467982] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.842 [2024-06-07 23:29:18.476976] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.842 [2024-06-07 23:29:18.477498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.842 [2024-06-07 23:29:18.477838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.842 [2024-06-07 23:29:18.477847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.842 [2024-06-07 23:29:18.477854] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.842 [2024-06-07 23:29:18.477996] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.842 [2024-06-07 23:29:18.478110] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.842 [2024-06-07 23:29:18.478118] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.842 [2024-06-07 23:29:18.478124] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.842 [2024-06-07 23:29:18.480412] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.842 [2024-06-07 23:29:18.489610] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.842 [2024-06-07 23:29:18.490033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.842 [2024-06-07 23:29:18.490404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.842 [2024-06-07 23:29:18.490414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.842 [2024-06-07 23:29:18.490421] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.842 [2024-06-07 23:29:18.490599] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.842 [2024-06-07 23:29:18.490759] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.842 [2024-06-07 23:29:18.490766] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.842 [2024-06-07 23:29:18.490773] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.842 [2024-06-07 23:29:18.493002] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.842 [2024-06-07 23:29:18.502277] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.842 [2024-06-07 23:29:18.502768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.842 [2024-06-07 23:29:18.503116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.842 [2024-06-07 23:29:18.503126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.842 [2024-06-07 23:29:18.503133] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.842 [2024-06-07 23:29:18.503318] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.842 [2024-06-07 23:29:18.503424] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.842 [2024-06-07 23:29:18.503432] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.842 [2024-06-07 23:29:18.503439] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.842 [2024-06-07 23:29:18.505608] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.842 [2024-06-07 23:29:18.514614] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.842 [2024-06-07 23:29:18.515127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.842 [2024-06-07 23:29:18.515521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.842 [2024-06-07 23:29:18.515536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:55.842 [2024-06-07 23:29:18.515545] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:55.842 [2024-06-07 23:29:18.515688] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:55.842 [2024-06-07 23:29:18.515852] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.842 [2024-06-07 23:29:18.515864] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.842 [2024-06-07 23:29:18.515872] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.842 [2024-06-07 23:29:18.518042] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.104 [2024-06-07 23:29:18.526891] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.104 [2024-06-07 23:29:18.527313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.104 [2024-06-07 23:29:18.527712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.104 [2024-06-07 23:29:18.527724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.104 [2024-06-07 23:29:18.527733] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.104 [2024-06-07 23:29:18.527894] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.104 [2024-06-07 23:29:18.527985] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.104 [2024-06-07 23:29:18.527992] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.104 [2024-06-07 23:29:18.528000] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.104 [2024-06-07 23:29:18.530246] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.104 [2024-06-07 23:29:18.539415] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.104 [2024-06-07 23:29:18.539953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.104 [2024-06-07 23:29:18.540289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.104 [2024-06-07 23:29:18.540299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.105 [2024-06-07 23:29:18.540307] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.105 [2024-06-07 23:29:18.540449] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.105 [2024-06-07 23:29:18.540555] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.105 [2024-06-07 23:29:18.540563] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.105 [2024-06-07 23:29:18.540570] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.105 [2024-06-07 23:29:18.542801] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.105 [2024-06-07 23:29:18.552018] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.105 [2024-06-07 23:29:18.552499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.105 [2024-06-07 23:29:18.552873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.105 [2024-06-07 23:29:18.552882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.105 [2024-06-07 23:29:18.552890] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.105 [2024-06-07 23:29:18.553014] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.105 [2024-06-07 23:29:18.553155] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.105 [2024-06-07 23:29:18.553163] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.105 [2024-06-07 23:29:18.553174] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.105 [2024-06-07 23:29:18.555535] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.105 [2024-06-07 23:29:18.564441] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.105 [2024-06-07 23:29:18.564945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.105 [2024-06-07 23:29:18.565162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.105 [2024-06-07 23:29:18.565176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.105 [2024-06-07 23:29:18.565184] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.105 [2024-06-07 23:29:18.565333] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.105 [2024-06-07 23:29:18.565476] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.105 [2024-06-07 23:29:18.565484] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.105 [2024-06-07 23:29:18.565490] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.105 [2024-06-07 23:29:18.567795] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.105 [2024-06-07 23:29:18.576986] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.105 [2024-06-07 23:29:18.577579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.105 [2024-06-07 23:29:18.577948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.105 [2024-06-07 23:29:18.577961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.105 [2024-06-07 23:29:18.577970] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.105 [2024-06-07 23:29:18.578131] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.105 [2024-06-07 23:29:18.578301] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.105 [2024-06-07 23:29:18.578310] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.105 [2024-06-07 23:29:18.578317] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.105 [2024-06-07 23:29:18.580553] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.105 [2024-06-07 23:29:18.589458] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.105 [2024-06-07 23:29:18.589940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.105 [2024-06-07 23:29:18.590277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.105 [2024-06-07 23:29:18.590288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.105 [2024-06-07 23:29:18.590296] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.105 [2024-06-07 23:29:18.590475] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.105 [2024-06-07 23:29:18.590634] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.105 [2024-06-07 23:29:18.590642] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.105 [2024-06-07 23:29:18.590649] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.105 [2024-06-07 23:29:18.592924] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.105 [2024-06-07 23:29:18.601992] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.105 [2024-06-07 23:29:18.602604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.105 [2024-06-07 23:29:18.602966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.105 [2024-06-07 23:29:18.602979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.105 [2024-06-07 23:29:18.602988] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.105 [2024-06-07 23:29:18.603130] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.105 [2024-06-07 23:29:18.603302] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.105 [2024-06-07 23:29:18.603311] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.105 [2024-06-07 23:29:18.603319] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.105 [2024-06-07 23:29:18.605520] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.105 [2024-06-07 23:29:18.614382] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.105 [2024-06-07 23:29:18.615012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.105 [2024-06-07 23:29:18.615392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.105 [2024-06-07 23:29:18.615407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.105 [2024-06-07 23:29:18.615416] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.105 [2024-06-07 23:29:18.615595] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.105 [2024-06-07 23:29:18.615758] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.105 [2024-06-07 23:29:18.615766] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.105 [2024-06-07 23:29:18.615773] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.105 [2024-06-07 23:29:18.617995] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.105 [2024-06-07 23:29:18.626902] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.105 [2024-06-07 23:29:18.627496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.105 [2024-06-07 23:29:18.627862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.105 [2024-06-07 23:29:18.627874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.105 [2024-06-07 23:29:18.627883] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.105 [2024-06-07 23:29:18.628044] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.105 [2024-06-07 23:29:18.628153] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.105 [2024-06-07 23:29:18.628161] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.105 [2024-06-07 23:29:18.628168] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.105 [2024-06-07 23:29:18.630395] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.105 [2024-06-07 23:29:18.639490] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.105 [2024-06-07 23:29:18.640106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.105 [2024-06-07 23:29:18.640557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.105 [2024-06-07 23:29:18.640571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.105 [2024-06-07 23:29:18.640581] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.105 [2024-06-07 23:29:18.640741] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.105 [2024-06-07 23:29:18.640904] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.105 [2024-06-07 23:29:18.640913] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.106 [2024-06-07 23:29:18.640920] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.106 [2024-06-07 23:29:18.643157] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.106 [2024-06-07 23:29:18.651944] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.106 [2024-06-07 23:29:18.652440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.106 [2024-06-07 23:29:18.652781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.106 [2024-06-07 23:29:18.652790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.106 [2024-06-07 23:29:18.652798] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.106 [2024-06-07 23:29:18.652976] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.106 [2024-06-07 23:29:18.653136] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.106 [2024-06-07 23:29:18.653144] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.106 [2024-06-07 23:29:18.653151] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.106 [2024-06-07 23:29:18.655439] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.106 [2024-06-07 23:29:18.664454] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.106 [2024-06-07 23:29:18.665039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.106 [2024-06-07 23:29:18.665380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.106 [2024-06-07 23:29:18.665396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.106 [2024-06-07 23:29:18.665405] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.106 [2024-06-07 23:29:18.665566] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.106 [2024-06-07 23:29:18.665675] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.106 [2024-06-07 23:29:18.665682] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.106 [2024-06-07 23:29:18.665690] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.106 [2024-06-07 23:29:18.668060] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.106 [2024-06-07 23:29:18.676886] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.106 [2024-06-07 23:29:18.677458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.106 [2024-06-07 23:29:18.677818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.106 [2024-06-07 23:29:18.677831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.106 [2024-06-07 23:29:18.677841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.106 [2024-06-07 23:29:18.678020] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.106 [2024-06-07 23:29:18.678147] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.106 [2024-06-07 23:29:18.678155] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.106 [2024-06-07 23:29:18.678162] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.106 [2024-06-07 23:29:18.680372] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.106 [2024-06-07 23:29:18.689311] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.106 [2024-06-07 23:29:18.689774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.106 [2024-06-07 23:29:18.690154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.106 [2024-06-07 23:29:18.690168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.106 [2024-06-07 23:29:18.690177] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.106 [2024-06-07 23:29:18.690345] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.106 [2024-06-07 23:29:18.690472] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.106 [2024-06-07 23:29:18.690480] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.106 [2024-06-07 23:29:18.690487] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.106 [2024-06-07 23:29:18.692779] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.106 [2024-06-07 23:29:18.701727] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.106 [2024-06-07 23:29:18.702356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.106 [2024-06-07 23:29:18.702727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.106 [2024-06-07 23:29:18.702740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.106 [2024-06-07 23:29:18.702749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.106 [2024-06-07 23:29:18.702873] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.106 [2024-06-07 23:29:18.703037] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.106 [2024-06-07 23:29:18.703045] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.106 [2024-06-07 23:29:18.703052] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.106 [2024-06-07 23:29:18.705314] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.106 [2024-06-07 23:29:18.714226] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.106 [2024-06-07 23:29:18.714702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.106 [2024-06-07 23:29:18.715043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.106 [2024-06-07 23:29:18.715057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.106 [2024-06-07 23:29:18.715064] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.106 [2024-06-07 23:29:18.715189] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.106 [2024-06-07 23:29:18.715371] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.106 [2024-06-07 23:29:18.715379] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.106 [2024-06-07 23:29:18.715386] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.106 [2024-06-07 23:29:18.717530] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.106 [2024-06-07 23:29:18.726764] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.106 [2024-06-07 23:29:18.727233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.106 [2024-06-07 23:29:18.727620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.106 [2024-06-07 23:29:18.727633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.106 [2024-06-07 23:29:18.727642] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.106 [2024-06-07 23:29:18.727802] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.106 [2024-06-07 23:29:18.727966] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.106 [2024-06-07 23:29:18.727974] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.106 [2024-06-07 23:29:18.727981] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.106 [2024-06-07 23:29:18.730096] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.106 [2024-06-07 23:29:18.739309] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.106 [2024-06-07 23:29:18.739910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.106 [2024-06-07 23:29:18.740228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.106 [2024-06-07 23:29:18.740240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.106 [2024-06-07 23:29:18.740258] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.106 [2024-06-07 23:29:18.740438] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.106 [2024-06-07 23:29:18.740601] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.106 [2024-06-07 23:29:18.740609] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.106 [2024-06-07 23:29:18.740617] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.106 [2024-06-07 23:29:18.743000] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.106 [2024-06-07 23:29:18.751869] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.106 [2024-06-07 23:29:18.752523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.106 [2024-06-07 23:29:18.752884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.106 [2024-06-07 23:29:18.752896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.106 [2024-06-07 23:29:18.752910] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.106 [2024-06-07 23:29:18.753107] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.107 [2024-06-07 23:29:18.753261] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.107 [2024-06-07 23:29:18.753270] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.107 [2024-06-07 23:29:18.753278] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.107 [2024-06-07 23:29:18.755316] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.107 [2024-06-07 23:29:18.764446] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.107 [2024-06-07 23:29:18.765044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.107 [2024-06-07 23:29:18.765416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.107 [2024-06-07 23:29:18.765431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.107 [2024-06-07 23:29:18.765440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.107 [2024-06-07 23:29:18.765601] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.107 [2024-06-07 23:29:18.765728] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.107 [2024-06-07 23:29:18.765736] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.107 [2024-06-07 23:29:18.765743] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.107 [2024-06-07 23:29:18.767892] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.107 [2024-06-07 23:29:18.776938] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.107 [2024-06-07 23:29:18.777557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.107 [2024-06-07 23:29:18.777922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.107 [2024-06-07 23:29:18.777935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.107 [2024-06-07 23:29:18.777944] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.107 [2024-06-07 23:29:18.778105] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.107 [2024-06-07 23:29:18.778313] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.107 [2024-06-07 23:29:18.778322] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.107 [2024-06-07 23:29:18.778329] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.107 [2024-06-07 23:29:18.780640] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.369 [2024-06-07 23:29:18.789343] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.369 [2024-06-07 23:29:18.789950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.369 [2024-06-07 23:29:18.790316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.369 [2024-06-07 23:29:18.790331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.369 [2024-06-07 23:29:18.790340] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.369 [2024-06-07 23:29:18.790505] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.369 [2024-06-07 23:29:18.790632] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.369 [2024-06-07 23:29:18.790641] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.369 [2024-06-07 23:29:18.790648] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.369 [2024-06-07 23:29:18.792833] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.369 [2024-06-07 23:29:18.801668] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.369 [2024-06-07 23:29:18.802341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.369 [2024-06-07 23:29:18.802708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.369 [2024-06-07 23:29:18.802720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.369 [2024-06-07 23:29:18.802730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.369 [2024-06-07 23:29:18.802891] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.369 [2024-06-07 23:29:18.803036] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.369 [2024-06-07 23:29:18.803044] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.369 [2024-06-07 23:29:18.803051] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.369 [2024-06-07 23:29:18.805387] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.369 [2024-06-07 23:29:18.813970] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.369 [2024-06-07 23:29:18.814547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.369 [2024-06-07 23:29:18.814910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.369 [2024-06-07 23:29:18.814922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.369 [2024-06-07 23:29:18.814931] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.369 [2024-06-07 23:29:18.815074] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.369 [2024-06-07 23:29:18.815237] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.369 [2024-06-07 23:29:18.815257] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.369 [2024-06-07 23:29:18.815265] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.369 [2024-06-07 23:29:18.817594] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.369 [2024-06-07 23:29:18.826538] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.369 [2024-06-07 23:29:18.827147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.369 [2024-06-07 23:29:18.827528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.369 [2024-06-07 23:29:18.827542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.369 [2024-06-07 23:29:18.827552] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.369 [2024-06-07 23:29:18.827731] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.370 [2024-06-07 23:29:18.827880] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.370 [2024-06-07 23:29:18.827888] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.370 [2024-06-07 23:29:18.827896] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.370 [2024-06-07 23:29:18.830173] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.370 [2024-06-07 23:29:18.839120] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.370 [2024-06-07 23:29:18.839736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.370 [2024-06-07 23:29:18.840100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.370 [2024-06-07 23:29:18.840113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.370 [2024-06-07 23:29:18.840121] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.370 [2024-06-07 23:29:18.840290] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.370 [2024-06-07 23:29:18.840417] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.370 [2024-06-07 23:29:18.840425] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.370 [2024-06-07 23:29:18.840433] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.370 [2024-06-07 23:29:18.842652] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.370 [2024-06-07 23:29:18.851467] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.370 [2024-06-07 23:29:18.852083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.370 [2024-06-07 23:29:18.852468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.370 [2024-06-07 23:29:18.852482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.370 [2024-06-07 23:29:18.852491] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.370 [2024-06-07 23:29:18.852598] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.370 [2024-06-07 23:29:18.852724] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.370 [2024-06-07 23:29:18.852732] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.370 [2024-06-07 23:29:18.852739] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.370 [2024-06-07 23:29:18.855016] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.370 [2024-06-07 23:29:18.863884] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.370 [2024-06-07 23:29:18.864551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.370 [2024-06-07 23:29:18.864915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.370 [2024-06-07 23:29:18.864928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.370 [2024-06-07 23:29:18.864938] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.370 [2024-06-07 23:29:18.865098] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.370 [2024-06-07 23:29:18.865207] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.370 [2024-06-07 23:29:18.865219] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.370 [2024-06-07 23:29:18.865227] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.370 [2024-06-07 23:29:18.867471] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.370 [2024-06-07 23:29:18.876370] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.370 [2024-06-07 23:29:18.876955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.370 [2024-06-07 23:29:18.877295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.370 [2024-06-07 23:29:18.877310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.370 [2024-06-07 23:29:18.877319] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.370 [2024-06-07 23:29:18.877425] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.370 [2024-06-07 23:29:18.877533] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.370 [2024-06-07 23:29:18.877541] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.370 [2024-06-07 23:29:18.877549] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.370 [2024-06-07 23:29:18.879843] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.370 [2024-06-07 23:29:18.888786] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.370 [2024-06-07 23:29:18.889283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.370 [2024-06-07 23:29:18.889562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.370 [2024-06-07 23:29:18.889576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.370 [2024-06-07 23:29:18.889586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.370 [2024-06-07 23:29:18.889711] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.370 [2024-06-07 23:29:18.889874] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.370 [2024-06-07 23:29:18.889883] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.370 [2024-06-07 23:29:18.889890] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.370 [2024-06-07 23:29:18.892132] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.370 [2024-06-07 23:29:18.901198] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.370 [2024-06-07 23:29:18.901738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.370 [2024-06-07 23:29:18.902164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.370 [2024-06-07 23:29:18.902176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.370 [2024-06-07 23:29:18.902186] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.370 [2024-06-07 23:29:18.902373] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.370 [2024-06-07 23:29:18.902519] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.370 [2024-06-07 23:29:18.902527] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.370 [2024-06-07 23:29:18.902538] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.370 [2024-06-07 23:29:18.904506] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.370 [2024-06-07 23:29:18.913532] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.370 [2024-06-07 23:29:18.914110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.370 [2024-06-07 23:29:18.914496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.370 [2024-06-07 23:29:18.914510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.370 [2024-06-07 23:29:18.914520] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.370 [2024-06-07 23:29:18.914681] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.370 [2024-06-07 23:29:18.914844] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.370 [2024-06-07 23:29:18.914852] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.370 [2024-06-07 23:29:18.914859] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.370 [2024-06-07 23:29:18.917246] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.370 [2024-06-07 23:29:18.926083] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.370 [2024-06-07 23:29:18.926444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.370 [2024-06-07 23:29:18.926786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.370 [2024-06-07 23:29:18.926796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.370 [2024-06-07 23:29:18.926803] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.370 [2024-06-07 23:29:18.926964] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.370 [2024-06-07 23:29:18.927105] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.370 [2024-06-07 23:29:18.927113] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.370 [2024-06-07 23:29:18.927120] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.370 [2024-06-07 23:29:18.929341] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.370 [2024-06-07 23:29:18.938408] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.370 [2024-06-07 23:29:18.939010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.370 [2024-06-07 23:29:18.939296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.370 [2024-06-07 23:29:18.939316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.370 [2024-06-07 23:29:18.939325] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.370 [2024-06-07 23:29:18.939486] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.371 [2024-06-07 23:29:18.939649] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.371 [2024-06-07 23:29:18.939657] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.371 [2024-06-07 23:29:18.939665] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.371 [2024-06-07 23:29:18.941692] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.371 [2024-06-07 23:29:18.950861] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.371 [2024-06-07 23:29:18.951543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.371 [2024-06-07 23:29:18.951947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.371 [2024-06-07 23:29:18.951959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.371 [2024-06-07 23:29:18.951968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.371 [2024-06-07 23:29:18.952148] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.371 [2024-06-07 23:29:18.952238] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.371 [2024-06-07 23:29:18.952340] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.371 [2024-06-07 23:29:18.952348] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.371 [2024-06-07 23:29:18.954789] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.371 [2024-06-07 23:29:18.963330] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.371 [2024-06-07 23:29:18.963936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.371 [2024-06-07 23:29:18.964309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.371 [2024-06-07 23:29:18.964324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.371 [2024-06-07 23:29:18.964333] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.371 [2024-06-07 23:29:18.964495] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.371 [2024-06-07 23:29:18.964658] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.371 [2024-06-07 23:29:18.964665] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.371 [2024-06-07 23:29:18.964672] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.371 [2024-06-07 23:29:18.966860] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.371 [2024-06-07 23:29:18.975857] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.371 [2024-06-07 23:29:18.976490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.371 [2024-06-07 23:29:18.976853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.371 [2024-06-07 23:29:18.976866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.371 [2024-06-07 23:29:18.976875] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.371 [2024-06-07 23:29:18.977018] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.371 [2024-06-07 23:29:18.977181] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.371 [2024-06-07 23:29:18.977189] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.371 [2024-06-07 23:29:18.977197] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.371 [2024-06-07 23:29:18.979586] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.371 [2024-06-07 23:29:18.988377] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.371 [2024-06-07 23:29:18.988988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.371 [2024-06-07 23:29:18.989355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.371 [2024-06-07 23:29:18.989369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.371 [2024-06-07 23:29:18.989378] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.371 [2024-06-07 23:29:18.989484] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.371 [2024-06-07 23:29:18.989592] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.371 [2024-06-07 23:29:18.989600] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.371 [2024-06-07 23:29:18.989607] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.371 [2024-06-07 23:29:18.991774] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.371 [2024-06-07 23:29:19.000689] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.371 [2024-06-07 23:29:19.001268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.371 [2024-06-07 23:29:19.001650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.371 [2024-06-07 23:29:19.001663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.371 [2024-06-07 23:29:19.001672] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.371 [2024-06-07 23:29:19.001888] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.371 [2024-06-07 23:29:19.002087] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.371 [2024-06-07 23:29:19.002096] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.371 [2024-06-07 23:29:19.002103] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.371 [2024-06-07 23:29:19.004368] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.371 [2024-06-07 23:29:19.013116] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.371 [2024-06-07 23:29:19.013643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.371 [2024-06-07 23:29:19.014001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.371 [2024-06-07 23:29:19.014013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.371 [2024-06-07 23:29:19.014023] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.371 [2024-06-07 23:29:19.014129] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.371 [2024-06-07 23:29:19.014264] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.371 [2024-06-07 23:29:19.014272] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.371 [2024-06-07 23:29:19.014280] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.371 [2024-06-07 23:29:19.016555] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.371 [2024-06-07 23:29:19.025648] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.371 [2024-06-07 23:29:19.025969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.371 [2024-06-07 23:29:19.026301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.371 [2024-06-07 23:29:19.026312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.371 [2024-06-07 23:29:19.026320] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.371 [2024-06-07 23:29:19.026483] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.371 [2024-06-07 23:29:19.026643] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.371 [2024-06-07 23:29:19.026651] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.371 [2024-06-07 23:29:19.026658] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.371 [2024-06-07 23:29:19.028874] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.371 [2024-06-07 23:29:19.038313] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.371 [2024-06-07 23:29:19.038887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.371 [2024-06-07 23:29:19.039256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.371 [2024-06-07 23:29:19.039269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.371 [2024-06-07 23:29:19.039278] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.371 [2024-06-07 23:29:19.039457] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.371 [2024-06-07 23:29:19.039621] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.371 [2024-06-07 23:29:19.039629] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.371 [2024-06-07 23:29:19.039636] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.371 [2024-06-07 23:29:19.041804] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.635 [2024-06-07 23:29:19.050695] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.635 [2024-06-07 23:29:19.051185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.635 [2024-06-07 23:29:19.051535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.635 [2024-06-07 23:29:19.051545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.635 [2024-06-07 23:29:19.051553] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.635 [2024-06-07 23:29:19.051677] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.635 [2024-06-07 23:29:19.051818] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.635 [2024-06-07 23:29:19.051826] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.635 [2024-06-07 23:29:19.051834] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.635 [2024-06-07 23:29:19.053917] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.635 [2024-06-07 23:29:19.063151] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.635 [2024-06-07 23:29:19.063711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.635 [2024-06-07 23:29:19.064050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.635 [2024-06-07 23:29:19.064068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.635 [2024-06-07 23:29:19.064076] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.635 [2024-06-07 23:29:19.064200] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.635 [2024-06-07 23:29:19.064346] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.635 [2024-06-07 23:29:19.064354] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.635 [2024-06-07 23:29:19.064361] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.635 [2024-06-07 23:29:19.066665] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.635 [2024-06-07 23:29:19.075480] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.635 [2024-06-07 23:29:19.076113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.635 [2024-06-07 23:29:19.076489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.635 [2024-06-07 23:29:19.076503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.635 [2024-06-07 23:29:19.076513] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.635 [2024-06-07 23:29:19.076637] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.635 [2024-06-07 23:29:19.076818] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.635 [2024-06-07 23:29:19.076826] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.635 [2024-06-07 23:29:19.076833] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.635 [2024-06-07 23:29:19.079201] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.635 [2024-06-07 23:29:19.087803] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.635 [2024-06-07 23:29:19.088343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.635 [2024-06-07 23:29:19.088717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.635 [2024-06-07 23:29:19.088729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.635 [2024-06-07 23:29:19.088738] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.635 [2024-06-07 23:29:19.088881] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.635 [2024-06-07 23:29:19.089026] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.635 [2024-06-07 23:29:19.089034] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.635 [2024-06-07 23:29:19.089042] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.635 [2024-06-07 23:29:19.091305] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.635 [2024-06-07 23:29:19.100218] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.635 [2024-06-07 23:29:19.100849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.635 [2024-06-07 23:29:19.101214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.635 [2024-06-07 23:29:19.101227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.635 [2024-06-07 23:29:19.101240] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.635 [2024-06-07 23:29:19.101393] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.635 [2024-06-07 23:29:19.101593] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.635 [2024-06-07 23:29:19.101600] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.635 [2024-06-07 23:29:19.101608] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.635 [2024-06-07 23:29:19.104026] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.635 [2024-06-07 23:29:19.112644] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.635 [2024-06-07 23:29:19.113161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.635 [2024-06-07 23:29:19.113505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.635 [2024-06-07 23:29:19.113516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.635 [2024-06-07 23:29:19.113523] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.635 [2024-06-07 23:29:19.113684] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.635 [2024-06-07 23:29:19.113807] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.635 [2024-06-07 23:29:19.113815] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.635 [2024-06-07 23:29:19.113822] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.635 [2024-06-07 23:29:19.116108] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.635 [2024-06-07 23:29:19.125059] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.635 [2024-06-07 23:29:19.125636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.635 [2024-06-07 23:29:19.126017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.635 [2024-06-07 23:29:19.126029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.635 [2024-06-07 23:29:19.126038] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.635 [2024-06-07 23:29:19.126181] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.635 [2024-06-07 23:29:19.126315] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.635 [2024-06-07 23:29:19.126323] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.635 [2024-06-07 23:29:19.126331] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.635 [2024-06-07 23:29:19.128579] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.635 [2024-06-07 23:29:19.137524] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.635 [2024-06-07 23:29:19.138075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.635 [2024-06-07 23:29:19.138446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.635 [2024-06-07 23:29:19.138460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.635 [2024-06-07 23:29:19.138469] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.635 [2024-06-07 23:29:19.138653] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.635 [2024-06-07 23:29:19.138798] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.636 [2024-06-07 23:29:19.138806] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.636 [2024-06-07 23:29:19.138814] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.636 [2024-06-07 23:29:19.141089] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.636 [2024-06-07 23:29:19.150055] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.636 [2024-06-07 23:29:19.150641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.636 [2024-06-07 23:29:19.151008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.636 [2024-06-07 23:29:19.151020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.636 [2024-06-07 23:29:19.151029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.636 [2024-06-07 23:29:19.151190] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.636 [2024-06-07 23:29:19.151325] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.636 [2024-06-07 23:29:19.151334] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.636 [2024-06-07 23:29:19.151342] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.636 [2024-06-07 23:29:19.153436] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.636 [2024-06-07 23:29:19.162713] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.636 [2024-06-07 23:29:19.163288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.636 [2024-06-07 23:29:19.163662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.636 [2024-06-07 23:29:19.163675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.636 [2024-06-07 23:29:19.163684] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.636 [2024-06-07 23:29:19.163863] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.636 [2024-06-07 23:29:19.163971] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.636 [2024-06-07 23:29:19.163979] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.636 [2024-06-07 23:29:19.163987] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.636 [2024-06-07 23:29:19.166211] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.636 [2024-06-07 23:29:19.175181] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.636 [2024-06-07 23:29:19.175745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.636 [2024-06-07 23:29:19.176113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.636 [2024-06-07 23:29:19.176126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.636 [2024-06-07 23:29:19.176135] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.636 [2024-06-07 23:29:19.176285] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.636 [2024-06-07 23:29:19.176472] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.636 [2024-06-07 23:29:19.176481] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.636 [2024-06-07 23:29:19.176488] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.636 [2024-06-07 23:29:19.178852] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.636 [2024-06-07 23:29:19.187376] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.636 [2024-06-07 23:29:19.188014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.636 [2024-06-07 23:29:19.188293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.636 [2024-06-07 23:29:19.188309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.636 [2024-06-07 23:29:19.188318] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.636 [2024-06-07 23:29:19.188480] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.636 [2024-06-07 23:29:19.188625] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.636 [2024-06-07 23:29:19.188633] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.636 [2024-06-07 23:29:19.188641] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.636 [2024-06-07 23:29:19.190972] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
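Each refused connect is followed by "Failed to flush tqpair=... (9): Bad file descriptor": errno 9 is EBADF on Linux, which is what any further send()/flush returns once the qpair's socket has already been torn down after the failed connection attempt. A small illustrative sketch of that effect (not SPDK code):

/* Illustrative only: once the socket behind the qpair is closed, a later
 * send()/flush on the stale descriptor fails with errno 9 (EBADF),
 * i.e. "Bad file descriptor" as reported by the log above. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    close(fd);  /* descriptor already torn down after the refused connect */

    char byte = 0;
    if (send(fd, &byte, 1, 0) < 0) {
        /* Prints: send failed, errno = 9 (Bad file descriptor) */
        printf("send failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    return 0;
}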
00:32:56.636 [2024-06-07 23:29:19.199969] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.636 [2024-06-07 23:29:19.200552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.636 [2024-06-07 23:29:19.200919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.636 [2024-06-07 23:29:19.200932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.636 [2024-06-07 23:29:19.200941] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.636 [2024-06-07 23:29:19.201120] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.636 [2024-06-07 23:29:19.201329] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.636 [2024-06-07 23:29:19.201337] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.636 [2024-06-07 23:29:19.201345] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.636 [2024-06-07 23:29:19.203657] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.636 [2024-06-07 23:29:19.212436] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.636 [2024-06-07 23:29:19.213011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.636 [2024-06-07 23:29:19.213250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.636 [2024-06-07 23:29:19.213265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.636 [2024-06-07 23:29:19.213274] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.636 [2024-06-07 23:29:19.213435] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.636 [2024-06-07 23:29:19.213599] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.636 [2024-06-07 23:29:19.213611] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.636 [2024-06-07 23:29:19.213619] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.636 [2024-06-07 23:29:19.215806] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.636 [2024-06-07 23:29:19.224910] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.636 [2024-06-07 23:29:19.225466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.636 [2024-06-07 23:29:19.225837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.636 [2024-06-07 23:29:19.225849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.636 [2024-06-07 23:29:19.225859] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.636 [2024-06-07 23:29:19.226038] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.636 [2024-06-07 23:29:19.226164] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.636 [2024-06-07 23:29:19.226173] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.636 [2024-06-07 23:29:19.226180] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.636 [2024-06-07 23:29:19.228353] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.636 [2024-06-07 23:29:19.237295] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.636 [2024-06-07 23:29:19.237912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.636 [2024-06-07 23:29:19.238286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.636 [2024-06-07 23:29:19.238300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.636 [2024-06-07 23:29:19.238309] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.636 [2024-06-07 23:29:19.238452] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.636 [2024-06-07 23:29:19.238561] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.636 [2024-06-07 23:29:19.238568] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.636 [2024-06-07 23:29:19.238576] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.636 [2024-06-07 23:29:19.240780] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.636 [2024-06-07 23:29:19.249992] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.636 [2024-06-07 23:29:19.250599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.636 [2024-06-07 23:29:19.250961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.636 [2024-06-07 23:29:19.250973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.636 [2024-06-07 23:29:19.250982] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.636 [2024-06-07 23:29:19.251107] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.636 [2024-06-07 23:29:19.251296] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.636 [2024-06-07 23:29:19.251305] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.637 [2024-06-07 23:29:19.251317] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.637 [2024-06-07 23:29:19.253610] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.637 [2024-06-07 23:29:19.262173] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.637 [2024-06-07 23:29:19.262800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.637 [2024-06-07 23:29:19.263162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.637 [2024-06-07 23:29:19.263174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.637 [2024-06-07 23:29:19.263184] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.637 [2024-06-07 23:29:19.263316] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.637 [2024-06-07 23:29:19.263462] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.637 [2024-06-07 23:29:19.263470] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.637 [2024-06-07 23:29:19.263478] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.637 [2024-06-07 23:29:19.265717] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.637 [2024-06-07 23:29:19.274786] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.637 [2024-06-07 23:29:19.275459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.637 [2024-06-07 23:29:19.275819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.637 [2024-06-07 23:29:19.275832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.637 [2024-06-07 23:29:19.275841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.637 [2024-06-07 23:29:19.275965] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.637 [2024-06-07 23:29:19.276110] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.637 [2024-06-07 23:29:19.276118] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.637 [2024-06-07 23:29:19.276126] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.637 [2024-06-07 23:29:19.278317] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.637 [2024-06-07 23:29:19.287246] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.637 [2024-06-07 23:29:19.287830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.637 [2024-06-07 23:29:19.288194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.637 [2024-06-07 23:29:19.288207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.637 [2024-06-07 23:29:19.288217] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.637 [2024-06-07 23:29:19.288386] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.637 [2024-06-07 23:29:19.288549] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.637 [2024-06-07 23:29:19.288558] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.637 [2024-06-07 23:29:19.288565] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.637 [2024-06-07 23:29:19.290863] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.637 [2024-06-07 23:29:19.299647] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.637 [2024-06-07 23:29:19.300129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.637 [2024-06-07 23:29:19.300584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.637 [2024-06-07 23:29:19.300620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.637 [2024-06-07 23:29:19.300631] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.637 [2024-06-07 23:29:19.300773] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.637 [2024-06-07 23:29:19.300937] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.637 [2024-06-07 23:29:19.300945] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.637 [2024-06-07 23:29:19.300952] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.637 [2024-06-07 23:29:19.303194] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.637 [2024-06-07 23:29:19.312175] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.637 [2024-06-07 23:29:19.312724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.637 [2024-06-07 23:29:19.313063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.637 [2024-06-07 23:29:19.313073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.637 [2024-06-07 23:29:19.313081] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.637 [2024-06-07 23:29:19.313241] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.637 [2024-06-07 23:29:19.313391] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.637 [2024-06-07 23:29:19.313399] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.637 [2024-06-07 23:29:19.313405] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.900 [2024-06-07 23:29:19.315691] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.900 [2024-06-07 23:29:19.324705] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.900 [2024-06-07 23:29:19.325191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.900 [2024-06-07 23:29:19.325527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.900 [2024-06-07 23:29:19.325537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.900 [2024-06-07 23:29:19.325545] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.900 [2024-06-07 23:29:19.325687] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.900 [2024-06-07 23:29:19.325828] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.900 [2024-06-07 23:29:19.325836] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.900 [2024-06-07 23:29:19.325842] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.900 [2024-06-07 23:29:19.328073] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.900 [2024-06-07 23:29:19.337082] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.900 [2024-06-07 23:29:19.337716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.900 [2024-06-07 23:29:19.338136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.900 [2024-06-07 23:29:19.338149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.900 [2024-06-07 23:29:19.338158] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.900 [2024-06-07 23:29:19.338364] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.900 [2024-06-07 23:29:19.338491] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.900 [2024-06-07 23:29:19.338500] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.900 [2024-06-07 23:29:19.338507] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.900 [2024-06-07 23:29:19.340598] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.900 [2024-06-07 23:29:19.349590] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.900 [2024-06-07 23:29:19.350198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.900 [2024-06-07 23:29:19.350619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.900 [2024-06-07 23:29:19.350632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.900 [2024-06-07 23:29:19.350641] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.900 [2024-06-07 23:29:19.350820] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.900 [2024-06-07 23:29:19.350984] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.900 [2024-06-07 23:29:19.350992] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.900 [2024-06-07 23:29:19.350999] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.900 [2024-06-07 23:29:19.353148] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.900 [2024-06-07 23:29:19.362188] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.900 [2024-06-07 23:29:19.362663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.900 [2024-06-07 23:29:19.363000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.900 [2024-06-07 23:29:19.363010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.900 [2024-06-07 23:29:19.363018] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.900 [2024-06-07 23:29:19.363160] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.900 [2024-06-07 23:29:19.363360] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.900 [2024-06-07 23:29:19.363368] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.900 [2024-06-07 23:29:19.363375] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.900 [2024-06-07 23:29:19.365586] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.900 [2024-06-07 23:29:19.374716] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.900 [2024-06-07 23:29:19.375198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.900 [2024-06-07 23:29:19.375545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.900 [2024-06-07 23:29:19.375555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.900 [2024-06-07 23:29:19.375562] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.900 [2024-06-07 23:29:19.375686] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.900 [2024-06-07 23:29:19.375828] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.900 [2024-06-07 23:29:19.375835] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.900 [2024-06-07 23:29:19.375842] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.900 [2024-06-07 23:29:19.378143] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.900 [2024-06-07 23:29:19.387121] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.900 [2024-06-07 23:29:19.387548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.900 [2024-06-07 23:29:19.387885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.900 [2024-06-07 23:29:19.387894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.900 [2024-06-07 23:29:19.387901] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.900 [2024-06-07 23:29:19.388043] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.900 [2024-06-07 23:29:19.388148] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.900 [2024-06-07 23:29:19.388156] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.900 [2024-06-07 23:29:19.388162] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.900 [2024-06-07 23:29:19.390381] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.900 [2024-06-07 23:29:19.399521] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.900 [2024-06-07 23:29:19.400106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.900 [2024-06-07 23:29:19.400486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.900 [2024-06-07 23:29:19.400500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.900 [2024-06-07 23:29:19.400509] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.900 [2024-06-07 23:29:19.400706] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.901 [2024-06-07 23:29:19.400851] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.901 [2024-06-07 23:29:19.400859] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.901 [2024-06-07 23:29:19.400867] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.901 [2024-06-07 23:29:19.403254] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.901 [2024-06-07 23:29:19.411916] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.901 [2024-06-07 23:29:19.412293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.901 [2024-06-07 23:29:19.412608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.901 [2024-06-07 23:29:19.412622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.901 [2024-06-07 23:29:19.412630] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.901 [2024-06-07 23:29:19.412772] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.901 [2024-06-07 23:29:19.412968] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.901 [2024-06-07 23:29:19.412976] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.901 [2024-06-07 23:29:19.412982] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.901 [2024-06-07 23:29:19.415159] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.901 [2024-06-07 23:29:19.424313] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.901 [2024-06-07 23:29:19.424861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.901 [2024-06-07 23:29:19.425227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.901 [2024-06-07 23:29:19.425240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.901 [2024-06-07 23:29:19.425257] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.901 [2024-06-07 23:29:19.425418] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.901 [2024-06-07 23:29:19.425563] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.901 [2024-06-07 23:29:19.425571] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.901 [2024-06-07 23:29:19.425579] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.901 [2024-06-07 23:29:19.428036] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.901 [2024-06-07 23:29:19.436713] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.901 [2024-06-07 23:29:19.437332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.901 [2024-06-07 23:29:19.437699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.901 [2024-06-07 23:29:19.437712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.901 [2024-06-07 23:29:19.437721] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.901 [2024-06-07 23:29:19.437863] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.901 [2024-06-07 23:29:19.438027] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.901 [2024-06-07 23:29:19.438035] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.901 [2024-06-07 23:29:19.438042] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.901 [2024-06-07 23:29:19.440380] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.901 [2024-06-07 23:29:19.449519] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.901 [2024-06-07 23:29:19.450117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.901 [2024-06-07 23:29:19.450510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.901 [2024-06-07 23:29:19.450524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.901 [2024-06-07 23:29:19.450537] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.901 [2024-06-07 23:29:19.450717] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.901 [2024-06-07 23:29:19.450862] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.901 [2024-06-07 23:29:19.450870] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.901 [2024-06-07 23:29:19.450877] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.901 [2024-06-07 23:29:19.453027] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.901 [2024-06-07 23:29:19.461946] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.901 [2024-06-07 23:29:19.462574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.901 [2024-06-07 23:29:19.462938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.901 [2024-06-07 23:29:19.462951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.901 [2024-06-07 23:29:19.462961] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.901 [2024-06-07 23:29:19.463067] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.901 [2024-06-07 23:29:19.463194] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.901 [2024-06-07 23:29:19.463202] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.901 [2024-06-07 23:29:19.463210] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.901 [2024-06-07 23:29:19.465328] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.901 [2024-06-07 23:29:19.474528] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.901 [2024-06-07 23:29:19.475138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.901 [2024-06-07 23:29:19.475508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.901 [2024-06-07 23:29:19.475522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.901 [2024-06-07 23:29:19.475532] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.901 [2024-06-07 23:29:19.475693] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.901 [2024-06-07 23:29:19.475820] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.901 [2024-06-07 23:29:19.475827] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.901 [2024-06-07 23:29:19.475835] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.901 [2024-06-07 23:29:19.478112] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.901 [2024-06-07 23:29:19.486982] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.901 [2024-06-07 23:29:19.487572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.901 [2024-06-07 23:29:19.487938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.901 [2024-06-07 23:29:19.487951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.901 [2024-06-07 23:29:19.487961] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.901 [2024-06-07 23:29:19.488144] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.901 [2024-06-07 23:29:19.488297] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.901 [2024-06-07 23:29:19.488305] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.901 [2024-06-07 23:29:19.488313] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.901 [2024-06-07 23:29:19.490495] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.901 [2024-06-07 23:29:19.499568] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.901 [2024-06-07 23:29:19.500054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.901 [2024-06-07 23:29:19.500404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.901 [2024-06-07 23:29:19.500414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.901 [2024-06-07 23:29:19.500421] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.901 [2024-06-07 23:29:19.500564] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.901 [2024-06-07 23:29:19.500670] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.901 [2024-06-07 23:29:19.500677] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.901 [2024-06-07 23:29:19.500684] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.901 [2024-06-07 23:29:19.502919] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.901 [2024-06-07 23:29:19.512060] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.901 [2024-06-07 23:29:19.512542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.901 [2024-06-07 23:29:19.512756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.902 [2024-06-07 23:29:19.512769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.902 [2024-06-07 23:29:19.512777] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.902 [2024-06-07 23:29:19.512920] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.902 [2024-06-07 23:29:19.513080] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.902 [2024-06-07 23:29:19.513088] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.902 [2024-06-07 23:29:19.513095] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.902 [2024-06-07 23:29:19.515462] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.902 [2024-06-07 23:29:19.524576] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.902 [2024-06-07 23:29:19.524953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.902 [2024-06-07 23:29:19.525295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.902 [2024-06-07 23:29:19.525305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.902 [2024-06-07 23:29:19.525313] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.902 [2024-06-07 23:29:19.525473] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.902 [2024-06-07 23:29:19.525618] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.902 [2024-06-07 23:29:19.525626] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.902 [2024-06-07 23:29:19.525633] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.902 [2024-06-07 23:29:19.527756] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.902 [2024-06-07 23:29:19.537121] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.902 [2024-06-07 23:29:19.537684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.902 [2024-06-07 23:29:19.538039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.902 [2024-06-07 23:29:19.538052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.902 [2024-06-07 23:29:19.538061] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.902 [2024-06-07 23:29:19.538204] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.902 [2024-06-07 23:29:19.538376] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.902 [2024-06-07 23:29:19.538385] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.902 [2024-06-07 23:29:19.538393] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.902 [2024-06-07 23:29:19.540539] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.902 [2024-06-07 23:29:19.549564] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.902 [2024-06-07 23:29:19.550188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.902 [2024-06-07 23:29:19.550620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.902 [2024-06-07 23:29:19.550634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.902 [2024-06-07 23:29:19.550643] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.902 [2024-06-07 23:29:19.550786] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.902 [2024-06-07 23:29:19.550913] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.902 [2024-06-07 23:29:19.550924] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.902 [2024-06-07 23:29:19.550931] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.902 [2024-06-07 23:29:19.553098] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.902 [2024-06-07 23:29:19.562128] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.902 [2024-06-07 23:29:19.562706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.902 [2024-06-07 23:29:19.563014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.902 [2024-06-07 23:29:19.563027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.902 [2024-06-07 23:29:19.563036] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.902 [2024-06-07 23:29:19.563198] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.902 [2024-06-07 23:29:19.563332] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.902 [2024-06-07 23:29:19.563346] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.902 [2024-06-07 23:29:19.563354] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.902 [2024-06-07 23:29:19.565518] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.902 [2024-06-07 23:29:19.574492] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.902 [2024-06-07 23:29:19.575024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.902 [2024-06-07 23:29:19.575420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.902 [2024-06-07 23:29:19.575430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:56.902 [2024-06-07 23:29:19.575438] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:56.902 [2024-06-07 23:29:19.575562] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:56.902 [2024-06-07 23:29:19.575668] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.902 [2024-06-07 23:29:19.575676] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.902 [2024-06-07 23:29:19.575682] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.902 [2024-06-07 23:29:19.577841] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.165 [2024-06-07 23:29:19.586870] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.165 [2024-06-07 23:29:19.587441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.165 [2024-06-07 23:29:19.587780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.165 [2024-06-07 23:29:19.587789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.165 [2024-06-07 23:29:19.587796] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.165 [2024-06-07 23:29:19.587920] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.165 [2024-06-07 23:29:19.588024] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.165 [2024-06-07 23:29:19.588032] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.165 [2024-06-07 23:29:19.588039] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.165 [2024-06-07 23:29:19.590216] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.165 [2024-06-07 23:29:19.599284] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.165 [2024-06-07 23:29:19.599825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.165 [2024-06-07 23:29:19.600165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.165 [2024-06-07 23:29:19.600174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.165 [2024-06-07 23:29:19.600181] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.165 [2024-06-07 23:29:19.600291] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.165 [2024-06-07 23:29:19.600379] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.165 [2024-06-07 23:29:19.600386] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.165 [2024-06-07 23:29:19.600396] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.165 [2024-06-07 23:29:19.602682] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.165 [2024-06-07 23:29:19.611675] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.165 [2024-06-07 23:29:19.612159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.165 [2024-06-07 23:29:19.612471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.165 [2024-06-07 23:29:19.612481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.165 [2024-06-07 23:29:19.612488] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.165 [2024-06-07 23:29:19.612593] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.165 [2024-06-07 23:29:19.612735] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.165 [2024-06-07 23:29:19.612744] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.165 [2024-06-07 23:29:19.612750] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.165 [2024-06-07 23:29:19.614873] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.165 [2024-06-07 23:29:19.624145] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.165 [2024-06-07 23:29:19.624668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.165 [2024-06-07 23:29:19.625433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.165 [2024-06-07 23:29:19.625453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.166 [2024-06-07 23:29:19.625461] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.166 [2024-06-07 23:29:19.625610] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.166 [2024-06-07 23:29:19.625716] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.166 [2024-06-07 23:29:19.625724] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.166 [2024-06-07 23:29:19.625731] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.166 [2024-06-07 23:29:19.628097] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.166 [2024-06-07 23:29:19.636663] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.166 [2024-06-07 23:29:19.637073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.166 [2024-06-07 23:29:19.637462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.166 [2024-06-07 23:29:19.637472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.166 [2024-06-07 23:29:19.637479] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.166 [2024-06-07 23:29:19.637639] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.166 [2024-06-07 23:29:19.637799] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.166 [2024-06-07 23:29:19.637806] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.166 [2024-06-07 23:29:19.637814] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.166 [2024-06-07 23:29:19.640230] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.166 [2024-06-07 23:29:19.649223] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.166 [2024-06-07 23:29:19.649721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.166 [2024-06-07 23:29:19.650057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.166 [2024-06-07 23:29:19.650067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.166 [2024-06-07 23:29:19.650074] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.166 [2024-06-07 23:29:19.650215] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.166 [2024-06-07 23:29:19.650379] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.166 [2024-06-07 23:29:19.650388] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.166 [2024-06-07 23:29:19.650394] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.166 [2024-06-07 23:29:19.652573] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.166 [2024-06-07 23:29:19.661583] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.166 [2024-06-07 23:29:19.662199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.166 [2024-06-07 23:29:19.662644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.166 [2024-06-07 23:29:19.662657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.166 [2024-06-07 23:29:19.662666] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.166 [2024-06-07 23:29:19.662790] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.166 [2024-06-07 23:29:19.662935] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.166 [2024-06-07 23:29:19.662943] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.166 [2024-06-07 23:29:19.662951] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.166 [2024-06-07 23:29:19.665285] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.166 [2024-06-07 23:29:19.674206] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.166 [2024-06-07 23:29:19.674746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.166 [2024-06-07 23:29:19.674883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.166 [2024-06-07 23:29:19.674892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.166 [2024-06-07 23:29:19.674899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.166 [2024-06-07 23:29:19.675023] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.166 [2024-06-07 23:29:19.675111] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.166 [2024-06-07 23:29:19.675119] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.166 [2024-06-07 23:29:19.675126] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.166 [2024-06-07 23:29:19.677451] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.166 [2024-06-07 23:29:19.686646] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.166 [2024-06-07 23:29:19.687182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.166 [2024-06-07 23:29:19.687544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.166 [2024-06-07 23:29:19.687554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.166 [2024-06-07 23:29:19.687562] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.166 [2024-06-07 23:29:19.687758] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.166 [2024-06-07 23:29:19.687899] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.166 [2024-06-07 23:29:19.687907] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.166 [2024-06-07 23:29:19.687914] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.166 [2024-06-07 23:29:19.690109] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.166 [2024-06-07 23:29:19.699061] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.166 [2024-06-07 23:29:19.699603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.166 [2024-06-07 23:29:19.699970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.166 [2024-06-07 23:29:19.699983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.166 [2024-06-07 23:29:19.699992] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.166 [2024-06-07 23:29:19.700135] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.166 [2024-06-07 23:29:19.700305] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.166 [2024-06-07 23:29:19.700313] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.166 [2024-06-07 23:29:19.700321] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.166 [2024-06-07 23:29:19.702543] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.166 [2024-06-07 23:29:19.711524] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.166 [2024-06-07 23:29:19.712062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.166 [2024-06-07 23:29:19.712489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.166 [2024-06-07 23:29:19.712499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.166 [2024-06-07 23:29:19.712507] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.166 [2024-06-07 23:29:19.712613] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.166 [2024-06-07 23:29:19.712755] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.166 [2024-06-07 23:29:19.712763] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.166 [2024-06-07 23:29:19.712771] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.166 [2024-06-07 23:29:19.714912] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.166 [2024-06-07 23:29:19.724106] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.166 [2024-06-07 23:29:19.724588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.166 [2024-06-07 23:29:19.724846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.166 [2024-06-07 23:29:19.724856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.166 [2024-06-07 23:29:19.724863] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.166 [2024-06-07 23:29:19.725005] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.166 [2024-06-07 23:29:19.725147] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.166 [2024-06-07 23:29:19.725155] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.166 [2024-06-07 23:29:19.725162] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.166 [2024-06-07 23:29:19.727486] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.166 [2024-06-07 23:29:19.736499] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.166 [2024-06-07 23:29:19.737056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.166 [2024-06-07 23:29:19.737407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.166 [2024-06-07 23:29:19.737417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.166 [2024-06-07 23:29:19.737425] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.166 [2024-06-07 23:29:19.737548] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.166 [2024-06-07 23:29:19.737689] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.167 [2024-06-07 23:29:19.737697] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.167 [2024-06-07 23:29:19.737704] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.167 [2024-06-07 23:29:19.739863] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.167 [2024-06-07 23:29:19.748938] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.167 [2024-06-07 23:29:19.749549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.167 [2024-06-07 23:29:19.749907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.167 [2024-06-07 23:29:19.749921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.167 [2024-06-07 23:29:19.749930] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.167 [2024-06-07 23:29:19.750091] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.167 [2024-06-07 23:29:19.750236] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.167 [2024-06-07 23:29:19.750251] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.167 [2024-06-07 23:29:19.750259] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.167 [2024-06-07 23:29:19.752547] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.167 [2024-06-07 23:29:19.761838] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.167 [2024-06-07 23:29:19.762296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.167 [2024-06-07 23:29:19.762652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.167 [2024-06-07 23:29:19.762666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.167 [2024-06-07 23:29:19.762674] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.167 [2024-06-07 23:29:19.762834] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.167 [2024-06-07 23:29:19.762940] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.167 [2024-06-07 23:29:19.762947] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.167 [2024-06-07 23:29:19.762954] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.167 [2024-06-07 23:29:19.765188] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.167 [2024-06-07 23:29:19.774416] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.167 [2024-06-07 23:29:19.774828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.167 [2024-06-07 23:29:19.775163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.167 [2024-06-07 23:29:19.775173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.167 [2024-06-07 23:29:19.775180] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.167 [2024-06-07 23:29:19.775325] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.167 [2024-06-07 23:29:19.775468] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.167 [2024-06-07 23:29:19.775475] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.167 [2024-06-07 23:29:19.775482] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.167 [2024-06-07 23:29:19.777839] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.167 [2024-06-07 23:29:19.786973] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.167 [2024-06-07 23:29:19.787459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.167 [2024-06-07 23:29:19.787796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.167 [2024-06-07 23:29:19.787806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.167 [2024-06-07 23:29:19.787813] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.167 [2024-06-07 23:29:19.787973] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.167 [2024-06-07 23:29:19.788132] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.167 [2024-06-07 23:29:19.788140] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.167 [2024-06-07 23:29:19.788147] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.167 [2024-06-07 23:29:19.790273] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.167 [2024-06-07 23:29:19.799663] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.167 [2024-06-07 23:29:19.800057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.167 [2024-06-07 23:29:19.800416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.167 [2024-06-07 23:29:19.800426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.167 [2024-06-07 23:29:19.800442] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.167 [2024-06-07 23:29:19.800602] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.167 [2024-06-07 23:29:19.800725] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.167 [2024-06-07 23:29:19.800733] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.167 [2024-06-07 23:29:19.800740] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.167 [2024-06-07 23:29:19.802881] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.167 [2024-06-07 23:29:19.812131] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.167 [2024-06-07 23:29:19.812559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.167 [2024-06-07 23:29:19.812866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.167 [2024-06-07 23:29:19.812875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.167 [2024-06-07 23:29:19.812883] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.167 [2024-06-07 23:29:19.813060] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.167 [2024-06-07 23:29:19.813184] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.167 [2024-06-07 23:29:19.813192] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.167 [2024-06-07 23:29:19.813199] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.167 [2024-06-07 23:29:19.815346] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.167 [2024-06-07 23:29:19.824798] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.167 [2024-06-07 23:29:19.825371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.167 [2024-06-07 23:29:19.825740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.167 [2024-06-07 23:29:19.825752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.167 [2024-06-07 23:29:19.825761] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.167 [2024-06-07 23:29:19.825941] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.167 [2024-06-07 23:29:19.826086] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.167 [2024-06-07 23:29:19.826094] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.167 [2024-06-07 23:29:19.826102] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.167 [2024-06-07 23:29:19.828366] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.167 [2024-06-07 23:29:19.837412] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.167 [2024-06-07 23:29:19.837957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.167 [2024-06-07 23:29:19.838293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.167 [2024-06-07 23:29:19.838303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.167 [2024-06-07 23:29:19.838311] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.167 [2024-06-07 23:29:19.838476] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.167 [2024-06-07 23:29:19.838619] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.167 [2024-06-07 23:29:19.838626] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.167 [2024-06-07 23:29:19.838633] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.167 [2024-06-07 23:29:19.841047] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.431 [2024-06-07 23:29:19.849880] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.431 [2024-06-07 23:29:19.850366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-06-07 23:29:19.850703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-06-07 23:29:19.850712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.431 [2024-06-07 23:29:19.850720] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.431 [2024-06-07 23:29:19.850916] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.431 [2024-06-07 23:29:19.851039] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.431 [2024-06-07 23:29:19.851047] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.431 [2024-06-07 23:29:19.851053] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.431 [2024-06-07 23:29:19.853156] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.431 [2024-06-07 23:29:19.862477] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.431 [2024-06-07 23:29:19.863001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-06-07 23:29:19.863330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-06-07 23:29:19.863340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.431 [2024-06-07 23:29:19.863347] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.431 [2024-06-07 23:29:19.863507] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.431 [2024-06-07 23:29:19.863667] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.431 [2024-06-07 23:29:19.863675] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.431 [2024-06-07 23:29:19.863682] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.431 [2024-06-07 23:29:19.865894] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.431 [2024-06-07 23:29:19.875036] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.431 [2024-06-07 23:29:19.875460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-06-07 23:29:19.875801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-06-07 23:29:19.875811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.431 [2024-06-07 23:29:19.875818] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.431 [2024-06-07 23:29:19.875960] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.431 [2024-06-07 23:29:19.876087] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.431 [2024-06-07 23:29:19.876095] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.431 [2024-06-07 23:29:19.876102] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.431 [2024-06-07 23:29:19.878446] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.431 [2024-06-07 23:29:19.887596] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.431 [2024-06-07 23:29:19.888138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-06-07 23:29:19.888509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-06-07 23:29:19.888520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.431 [2024-06-07 23:29:19.888527] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.431 [2024-06-07 23:29:19.888668] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.431 [2024-06-07 23:29:19.888810] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.431 [2024-06-07 23:29:19.888818] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.431 [2024-06-07 23:29:19.888826] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.431 [2024-06-07 23:29:19.890825] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.431 [2024-06-07 23:29:19.899904] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.431 [2024-06-07 23:29:19.900482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-06-07 23:29:19.900853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-06-07 23:29:19.900865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.431 [2024-06-07 23:29:19.900875] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.431 [2024-06-07 23:29:19.901072] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.431 [2024-06-07 23:29:19.901198] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.431 [2024-06-07 23:29:19.901206] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.431 [2024-06-07 23:29:19.901214] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.431 [2024-06-07 23:29:19.903546] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.431 [2024-06-07 23:29:19.912426] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.431 [2024-06-07 23:29:19.912906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-06-07 23:29:19.913254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-06-07 23:29:19.913265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.431 [2024-06-07 23:29:19.913273] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.431 [2024-06-07 23:29:19.913415] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.431 [2024-06-07 23:29:19.913539] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.431 [2024-06-07 23:29:19.913551] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.431 [2024-06-07 23:29:19.913558] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.431 [2024-06-07 23:29:19.916009] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.431 [2024-06-07 23:29:19.924934] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.431 [2024-06-07 23:29:19.925309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-06-07 23:29:19.925528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.431 [2024-06-07 23:29:19.925537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.432 [2024-06-07 23:29:19.925544] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.432 [2024-06-07 23:29:19.925704] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.432 [2024-06-07 23:29:19.925853] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.432 [2024-06-07 23:29:19.925861] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.432 [2024-06-07 23:29:19.925867] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.432 [2024-06-07 23:29:19.927814] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.432 [2024-06-07 23:29:19.937465] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.432 [2024-06-07 23:29:19.937948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-06-07 23:29:19.938284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-06-07 23:29:19.938294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.432 [2024-06-07 23:29:19.938301] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.432 [2024-06-07 23:29:19.938443] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.432 [2024-06-07 23:29:19.938602] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.432 [2024-06-07 23:29:19.938610] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.432 [2024-06-07 23:29:19.938617] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.432 [2024-06-07 23:29:19.940832] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.432 [2024-06-07 23:29:19.949921] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.432 [2024-06-07 23:29:19.950551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-06-07 23:29:19.950916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-06-07 23:29:19.950929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.432 [2024-06-07 23:29:19.950939] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.432 [2024-06-07 23:29:19.951100] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.432 [2024-06-07 23:29:19.951270] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.432 [2024-06-07 23:29:19.951279] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.432 [2024-06-07 23:29:19.951290] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.432 [2024-06-07 23:29:19.953672] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.432 [2024-06-07 23:29:19.962386] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.432 [2024-06-07 23:29:19.963001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-06-07 23:29:19.963367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-06-07 23:29:19.963381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.432 [2024-06-07 23:29:19.963391] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.432 [2024-06-07 23:29:19.963515] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.432 [2024-06-07 23:29:19.963660] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.432 [2024-06-07 23:29:19.963668] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.432 [2024-06-07 23:29:19.963675] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.432 [2024-06-07 23:29:19.965930] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.432 [2024-06-07 23:29:19.974845] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.432 [2024-06-07 23:29:19.975524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-06-07 23:29:19.975960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-06-07 23:29:19.975973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.432 [2024-06-07 23:29:19.975982] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.432 [2024-06-07 23:29:19.976106] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.432 [2024-06-07 23:29:19.976233] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.432 [2024-06-07 23:29:19.976241] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.432 [2024-06-07 23:29:19.976258] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.432 [2024-06-07 23:29:19.978390] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.432 [2024-06-07 23:29:19.987157] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.432 [2024-06-07 23:29:19.987680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-06-07 23:29:19.988015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-06-07 23:29:19.988025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.432 [2024-06-07 23:29:19.988033] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.432 [2024-06-07 23:29:19.988211] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.432 [2024-06-07 23:29:19.988324] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.432 [2024-06-07 23:29:19.988332] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.432 [2024-06-07 23:29:19.988339] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.432 [2024-06-07 23:29:19.990541] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.432 [2024-06-07 23:29:19.999618] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.432 [2024-06-07 23:29:20.000194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-06-07 23:29:20.000464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-06-07 23:29:20.000480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.432 [2024-06-07 23:29:20.000489] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.432 [2024-06-07 23:29:20.000651] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.432 [2024-06-07 23:29:20.000816] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.432 [2024-06-07 23:29:20.000824] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.432 [2024-06-07 23:29:20.000831] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.432 [2024-06-07 23:29:20.003735] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.432 [2024-06-07 23:29:20.012219] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.432 [2024-06-07 23:29:20.012775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-06-07 23:29:20.013119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-06-07 23:29:20.013128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.432 [2024-06-07 23:29:20.013136] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.432 [2024-06-07 23:29:20.013349] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.432 [2024-06-07 23:29:20.013474] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.432 [2024-06-07 23:29:20.013482] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.432 [2024-06-07 23:29:20.013489] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.432 [2024-06-07 23:29:20.015693] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.432 [2024-06-07 23:29:20.024543] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.432 [2024-06-07 23:29:20.025037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-06-07 23:29:20.025405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-06-07 23:29:20.025416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.432 [2024-06-07 23:29:20.025424] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.432 [2024-06-07 23:29:20.025566] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.432 [2024-06-07 23:29:20.025689] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.432 [2024-06-07 23:29:20.025697] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.432 [2024-06-07 23:29:20.025703] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.432 [2024-06-07 23:29:20.027993] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.432 [2024-06-07 23:29:20.036915] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.432 [2024-06-07 23:29:20.037267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-06-07 23:29:20.037512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.432 [2024-06-07 23:29:20.037524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.432 [2024-06-07 23:29:20.037531] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.433 [2024-06-07 23:29:20.037656] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.433 [2024-06-07 23:29:20.037780] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.433 [2024-06-07 23:29:20.037787] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.433 [2024-06-07 23:29:20.037794] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.433 [2024-06-07 23:29:20.040315] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.433 [2024-06-07 23:29:20.049370] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.433 [2024-06-07 23:29:20.049895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-06-07 23:29:20.050248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-06-07 23:29:20.050258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.433 [2024-06-07 23:29:20.050266] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.433 [2024-06-07 23:29:20.050389] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.433 [2024-06-07 23:29:20.050531] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.433 [2024-06-07 23:29:20.050539] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.433 [2024-06-07 23:29:20.050545] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.433 [2024-06-07 23:29:20.052780] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.433 [2024-06-07 23:29:20.061946] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.433 [2024-06-07 23:29:20.062338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-06-07 23:29:20.062694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-06-07 23:29:20.062704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.433 [2024-06-07 23:29:20.062711] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.433 [2024-06-07 23:29:20.062835] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.433 [2024-06-07 23:29:20.063031] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.433 [2024-06-07 23:29:20.063039] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.433 [2024-06-07 23:29:20.063046] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.433 [2024-06-07 23:29:20.065159] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.433 [2024-06-07 23:29:20.074438] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.433 [2024-06-07 23:29:20.074933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-06-07 23:29:20.075286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-06-07 23:29:20.075297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.433 [2024-06-07 23:29:20.075305] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.433 [2024-06-07 23:29:20.075428] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.433 [2024-06-07 23:29:20.075552] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.433 [2024-06-07 23:29:20.075559] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.433 [2024-06-07 23:29:20.075566] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.433 [2024-06-07 23:29:20.078034] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.433 [2024-06-07 23:29:20.086947] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.433 [2024-06-07 23:29:20.087573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-06-07 23:29:20.087942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-06-07 23:29:20.087954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.433 [2024-06-07 23:29:20.087964] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.433 [2024-06-07 23:29:20.088108] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.433 [2024-06-07 23:29:20.088235] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.433 [2024-06-07 23:29:20.088248] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.433 [2024-06-07 23:29:20.088256] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.433 [2024-06-07 23:29:20.090568] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.433 [2024-06-07 23:29:20.099426] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.433 [2024-06-07 23:29:20.100032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-06-07 23:29:20.100407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.433 [2024-06-07 23:29:20.100422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.433 [2024-06-07 23:29:20.100431] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.433 [2024-06-07 23:29:20.100574] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.433 [2024-06-07 23:29:20.100719] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.433 [2024-06-07 23:29:20.100727] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.433 [2024-06-07 23:29:20.100735] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.433 [2024-06-07 23:29:20.103195] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.696 [2024-06-07 23:29:20.111906] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.696 [2024-06-07 23:29:20.112549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.696 [2024-06-07 23:29:20.112889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.696 [2024-06-07 23:29:20.112902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.696 [2024-06-07 23:29:20.112911] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.696 [2024-06-07 23:29:20.113054] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.696 [2024-06-07 23:29:20.113261] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.696 [2024-06-07 23:29:20.113271] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.696 [2024-06-07 23:29:20.113278] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.696 [2024-06-07 23:29:20.115557] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.696 [2024-06-07 23:29:20.124331] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.696 [2024-06-07 23:29:20.124946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.696 [2024-06-07 23:29:20.125333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.696 [2024-06-07 23:29:20.125346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.696 [2024-06-07 23:29:20.125357] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.696 [2024-06-07 23:29:20.125500] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.696 [2024-06-07 23:29:20.125645] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.696 [2024-06-07 23:29:20.125653] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.696 [2024-06-07 23:29:20.125661] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.696 [2024-06-07 23:29:20.128010] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.696 [2024-06-07 23:29:20.136739] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.696 [2024-06-07 23:29:20.137215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.696 [2024-06-07 23:29:20.137576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.696 [2024-06-07 23:29:20.137586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.696 [2024-06-07 23:29:20.137594] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.696 [2024-06-07 23:29:20.137718] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.696 [2024-06-07 23:29:20.137841] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.696 [2024-06-07 23:29:20.137849] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.696 [2024-06-07 23:29:20.137855] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.696 [2024-06-07 23:29:20.140195] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.696 [2024-06-07 23:29:20.149167] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.696 [2024-06-07 23:29:20.149771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.696 [2024-06-07 23:29:20.150136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.696 [2024-06-07 23:29:20.150149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.696 [2024-06-07 23:29:20.150162] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.696 [2024-06-07 23:29:20.150347] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.696 [2024-06-07 23:29:20.150511] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.696 [2024-06-07 23:29:20.150520] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.696 [2024-06-07 23:29:20.150527] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.696 [2024-06-07 23:29:20.152675] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.696 [2024-06-07 23:29:20.161729] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.696 [2024-06-07 23:29:20.162215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.696 [2024-06-07 23:29:20.162571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.696 [2024-06-07 23:29:20.162581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.696 [2024-06-07 23:29:20.162589] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.696 [2024-06-07 23:29:20.162713] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.696 [2024-06-07 23:29:20.162855] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.696 [2024-06-07 23:29:20.162862] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.696 [2024-06-07 23:29:20.162869] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.696 [2024-06-07 23:29:20.165321] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.696 [2024-06-07 23:29:20.174057] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.696 [2024-06-07 23:29:20.174573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.696 [2024-06-07 23:29:20.174868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.696 [2024-06-07 23:29:20.174877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.696 [2024-06-07 23:29:20.174885] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.696 [2024-06-07 23:29:20.175009] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.696 [2024-06-07 23:29:20.175132] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.696 [2024-06-07 23:29:20.175139] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.696 [2024-06-07 23:29:20.175146] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.696 [2024-06-07 23:29:20.177361] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.696 [2024-06-07 23:29:20.186589] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.696 [2024-06-07 23:29:20.187119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.696 [2024-06-07 23:29:20.187504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.697 [2024-06-07 23:29:20.187518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.697 [2024-06-07 23:29:20.187527] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.697 [2024-06-07 23:29:20.187693] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.697 [2024-06-07 23:29:20.187784] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.697 [2024-06-07 23:29:20.187792] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.697 [2024-06-07 23:29:20.187801] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.697 [2024-06-07 23:29:20.190279] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.697 [2024-06-07 23:29:20.199117] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.697 [2024-06-07 23:29:20.199628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.697 [2024-06-07 23:29:20.199939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.697 [2024-06-07 23:29:20.199950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.697 [2024-06-07 23:29:20.199958] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.697 [2024-06-07 23:29:20.200082] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.697 [2024-06-07 23:29:20.200247] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.697 [2024-06-07 23:29:20.200256] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.697 [2024-06-07 23:29:20.200263] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.697 [2024-06-07 23:29:20.202314] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.697 [2024-06-07 23:29:20.211481] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.697 [2024-06-07 23:29:20.211945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.697 [2024-06-07 23:29:20.212187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.697 [2024-06-07 23:29:20.212196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.697 [2024-06-07 23:29:20.212203] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.697 [2024-06-07 23:29:20.212369] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.697 [2024-06-07 23:29:20.212493] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.697 [2024-06-07 23:29:20.212501] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.697 [2024-06-07 23:29:20.212508] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.697 [2024-06-07 23:29:20.214739] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.697 [2024-06-07 23:29:20.224000] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.697 [2024-06-07 23:29:20.224577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.697 [2024-06-07 23:29:20.224962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.697 [2024-06-07 23:29:20.224974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.697 [2024-06-07 23:29:20.224983] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.697 [2024-06-07 23:29:20.225089] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.697 [2024-06-07 23:29:20.225271] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.697 [2024-06-07 23:29:20.225281] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.697 [2024-06-07 23:29:20.225288] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.697 [2024-06-07 23:29:20.227525] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.697 [2024-06-07 23:29:20.236408] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.697 [2024-06-07 23:29:20.236844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.697 [2024-06-07 23:29:20.237225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.697 [2024-06-07 23:29:20.237234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.697 [2024-06-07 23:29:20.237246] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.697 [2024-06-07 23:29:20.237407] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.697 [2024-06-07 23:29:20.237513] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.697 [2024-06-07 23:29:20.237520] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.697 [2024-06-07 23:29:20.237527] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.697 [2024-06-07 23:29:20.239795] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.697 [2024-06-07 23:29:20.248961] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.697 [2024-06-07 23:29:20.249568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.697 [2024-06-07 23:29:20.249939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.697 [2024-06-07 23:29:20.249951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.697 [2024-06-07 23:29:20.249960] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.697 [2024-06-07 23:29:20.250139] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.697 [2024-06-07 23:29:20.250293] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.697 [2024-06-07 23:29:20.250302] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.697 [2024-06-07 23:29:20.250310] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.697 [2024-06-07 23:29:20.252658] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.697 [2024-06-07 23:29:20.261364] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.697 [2024-06-07 23:29:20.261969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.697 [2024-06-07 23:29:20.262331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.697 [2024-06-07 23:29:20.262344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.697 [2024-06-07 23:29:20.262353] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.697 [2024-06-07 23:29:20.262514] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.697 [2024-06-07 23:29:20.262659] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.697 [2024-06-07 23:29:20.262671] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.697 [2024-06-07 23:29:20.262679] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.697 [2024-06-07 23:29:20.264902] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.697 [2024-06-07 23:29:20.273749] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.697 [2024-06-07 23:29:20.274287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.697 [2024-06-07 23:29:20.274546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.697 [2024-06-07 23:29:20.274556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.697 [2024-06-07 23:29:20.274564] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.697 [2024-06-07 23:29:20.274710] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.697 [2024-06-07 23:29:20.274852] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.697 [2024-06-07 23:29:20.274860] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.697 [2024-06-07 23:29:20.274867] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.697 [2024-06-07 23:29:20.277196] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.697 [2024-06-07 23:29:20.286175] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.697 [2024-06-07 23:29:20.286807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.697 [2024-06-07 23:29:20.287172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.697 [2024-06-07 23:29:20.287184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.697 [2024-06-07 23:29:20.287194] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.697 [2024-06-07 23:29:20.287326] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.697 [2024-06-07 23:29:20.287453] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.697 [2024-06-07 23:29:20.287462] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.697 [2024-06-07 23:29:20.287469] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.697 [2024-06-07 23:29:20.289615] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.697 [2024-06-07 23:29:20.298659] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.697 [2024-06-07 23:29:20.299282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.697 [2024-06-07 23:29:20.299706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.697 [2024-06-07 23:29:20.299718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.698 [2024-06-07 23:29:20.299727] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.698 [2024-06-07 23:29:20.299925] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.698 [2024-06-07 23:29:20.300070] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.698 [2024-06-07 23:29:20.300078] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.698 [2024-06-07 23:29:20.300090] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.698 [2024-06-07 23:29:20.302301] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.698 [2024-06-07 23:29:20.310948] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.698 [2024-06-07 23:29:20.311529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.698 [2024-06-07 23:29:20.311946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.698 [2024-06-07 23:29:20.311959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.698 [2024-06-07 23:29:20.311968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.698 [2024-06-07 23:29:20.312129] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.698 [2024-06-07 23:29:20.312281] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.698 [2024-06-07 23:29:20.312290] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.698 [2024-06-07 23:29:20.312298] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.698 [2024-06-07 23:29:20.314608] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.698 [2024-06-07 23:29:20.323356] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.698 [2024-06-07 23:29:20.324021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.698 [2024-06-07 23:29:20.324388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.698 [2024-06-07 23:29:20.324403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.698 [2024-06-07 23:29:20.324412] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.698 [2024-06-07 23:29:20.324536] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.698 [2024-06-07 23:29:20.324663] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.698 [2024-06-07 23:29:20.324671] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.698 [2024-06-07 23:29:20.324678] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.698 [2024-06-07 23:29:20.327127] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.698 [2024-06-07 23:29:20.335781] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.698 [2024-06-07 23:29:20.336391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.698 [2024-06-07 23:29:20.336760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.698 [2024-06-07 23:29:20.336772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.698 [2024-06-07 23:29:20.336782] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.698 [2024-06-07 23:29:20.336906] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.698 [2024-06-07 23:29:20.337014] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.698 [2024-06-07 23:29:20.337022] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.698 [2024-06-07 23:29:20.337030] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.698 [2024-06-07 23:29:20.339352] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.698 [2024-06-07 23:29:20.348360] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.698 [2024-06-07 23:29:20.348924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.698 [2024-06-07 23:29:20.349250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.698 [2024-06-07 23:29:20.349263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.698 [2024-06-07 23:29:20.349273] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.698 [2024-06-07 23:29:20.349434] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.698 [2024-06-07 23:29:20.349597] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.698 [2024-06-07 23:29:20.349605] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.698 [2024-06-07 23:29:20.349612] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.698 [2024-06-07 23:29:20.351778] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.698 [2024-06-07 23:29:20.360946] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.698 [2024-06-07 23:29:20.361527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.698 [2024-06-07 23:29:20.361898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.698 [2024-06-07 23:29:20.361911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.698 [2024-06-07 23:29:20.361921] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.698 [2024-06-07 23:29:20.362063] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.698 [2024-06-07 23:29:20.362190] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.698 [2024-06-07 23:29:20.362198] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.698 [2024-06-07 23:29:20.362206] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.698 [2024-06-07 23:29:20.364379] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.698 [2024-06-07 23:29:20.373499] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.698 [2024-06-07 23:29:20.373984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.698 [2024-06-07 23:29:20.374271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.698 [2024-06-07 23:29:20.374283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.698 [2024-06-07 23:29:20.374291] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.698 [2024-06-07 23:29:20.374434] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.698 [2024-06-07 23:29:20.374576] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.698 [2024-06-07 23:29:20.374583] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.698 [2024-06-07 23:29:20.374590] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.962 [2024-06-07 23:29:20.376893] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.962 [2024-06-07 23:29:20.385966] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.962 [2024-06-07 23:29:20.386380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.962 [2024-06-07 23:29:20.386719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.962 [2024-06-07 23:29:20.386728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.962 [2024-06-07 23:29:20.386735] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.962 [2024-06-07 23:29:20.386859] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.962 [2024-06-07 23:29:20.386983] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.962 [2024-06-07 23:29:20.386991] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.962 [2024-06-07 23:29:20.386997] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.962 [2024-06-07 23:29:20.389377] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.962 [2024-06-07 23:29:20.398290] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.962 [2024-06-07 23:29:20.398878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.962 [2024-06-07 23:29:20.399250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.962 [2024-06-07 23:29:20.399264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.962 [2024-06-07 23:29:20.399273] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.962 [2024-06-07 23:29:20.399416] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.962 [2024-06-07 23:29:20.399560] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.962 [2024-06-07 23:29:20.399568] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.962 [2024-06-07 23:29:20.399576] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.962 [2024-06-07 23:29:20.401959] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.962 [2024-06-07 23:29:20.410940] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.962 [2024-06-07 23:29:20.411560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.962 [2024-06-07 23:29:20.411932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.962 [2024-06-07 23:29:20.411945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.962 [2024-06-07 23:29:20.411954] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.962 [2024-06-07 23:29:20.412079] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.962 [2024-06-07 23:29:20.412224] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.962 [2024-06-07 23:29:20.412232] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.962 [2024-06-07 23:29:20.412239] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.962 [2024-06-07 23:29:20.414341] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.962 [2024-06-07 23:29:20.423319] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.962 [2024-06-07 23:29:20.423964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.962 [2024-06-07 23:29:20.424420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.962 [2024-06-07 23:29:20.424434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.962 [2024-06-07 23:29:20.424443] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.962 [2024-06-07 23:29:20.424549] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.962 [2024-06-07 23:29:20.424676] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.962 [2024-06-07 23:29:20.424684] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.962 [2024-06-07 23:29:20.424691] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.962 [2024-06-07 23:29:20.426998] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.962 [2024-06-07 23:29:20.435835] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.962 [2024-06-07 23:29:20.436476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.962 [2024-06-07 23:29:20.436848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.962 [2024-06-07 23:29:20.436861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.962 [2024-06-07 23:29:20.436870] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.962 [2024-06-07 23:29:20.437085] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.962 [2024-06-07 23:29:20.437257] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.962 [2024-06-07 23:29:20.437268] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.962 [2024-06-07 23:29:20.437275] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.962 [2024-06-07 23:29:20.439586] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.962 [2024-06-07 23:29:20.448097] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.962 [2024-06-07 23:29:20.448676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.962 [2024-06-07 23:29:20.449046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.962 [2024-06-07 23:29:20.449059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.962 [2024-06-07 23:29:20.449068] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.962 [2024-06-07 23:29:20.449155] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.962 [2024-06-07 23:29:20.449327] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.962 [2024-06-07 23:29:20.449336] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.962 [2024-06-07 23:29:20.449344] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.962 [2024-06-07 23:29:20.451599] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.962 [2024-06-07 23:29:20.460525] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.962 [2024-06-07 23:29:20.461015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.962 [2024-06-07 23:29:20.461412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.962 [2024-06-07 23:29:20.461426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.962 [2024-06-07 23:29:20.461436] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.962 [2024-06-07 23:29:20.461578] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.962 [2024-06-07 23:29:20.461760] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.962 [2024-06-07 23:29:20.461768] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.962 [2024-06-07 23:29:20.461775] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.962 [2024-06-07 23:29:20.464179] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.962 [2024-06-07 23:29:20.473046] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.962 [2024-06-07 23:29:20.473620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.962 [2024-06-07 23:29:20.473992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.962 [2024-06-07 23:29:20.474004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.962 [2024-06-07 23:29:20.474013] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.962 [2024-06-07 23:29:20.474156] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.962 [2024-06-07 23:29:20.474273] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.962 [2024-06-07 23:29:20.474281] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.963 [2024-06-07 23:29:20.474289] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.963 [2024-06-07 23:29:20.476562] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.963 [2024-06-07 23:29:20.485731] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.963 [2024-06-07 23:29:20.486313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.963 [2024-06-07 23:29:20.486683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.963 [2024-06-07 23:29:20.486696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.963 [2024-06-07 23:29:20.486705] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.963 [2024-06-07 23:29:20.486848] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.963 [2024-06-07 23:29:20.486975] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.963 [2024-06-07 23:29:20.486983] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.963 [2024-06-07 23:29:20.486990] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.963 [2024-06-07 23:29:20.489296] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.963 [2024-06-07 23:29:20.498265] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.963 [2024-06-07 23:29:20.498840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.963 [2024-06-07 23:29:20.499210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.963 [2024-06-07 23:29:20.499223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.963 [2024-06-07 23:29:20.499236] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.963 [2024-06-07 23:29:20.499388] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.963 [2024-06-07 23:29:20.499516] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.963 [2024-06-07 23:29:20.499524] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.963 [2024-06-07 23:29:20.499531] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.963 [2024-06-07 23:29:20.501752] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.963 [2024-06-07 23:29:20.510688] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.963 [2024-06-07 23:29:20.511281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.963 [2024-06-07 23:29:20.511676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.963 [2024-06-07 23:29:20.511689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.963 [2024-06-07 23:29:20.511698] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.963 [2024-06-07 23:29:20.511877] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.963 [2024-06-07 23:29:20.511986] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.963 [2024-06-07 23:29:20.511994] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.963 [2024-06-07 23:29:20.512001] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.963 [2024-06-07 23:29:20.514227] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.963 [2024-06-07 23:29:20.523011] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.963 [2024-06-07 23:29:20.523601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.963 [2024-06-07 23:29:20.523969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.963 [2024-06-07 23:29:20.523982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.963 [2024-06-07 23:29:20.523991] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.963 [2024-06-07 23:29:20.524116] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.963 [2024-06-07 23:29:20.524271] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.963 [2024-06-07 23:29:20.524280] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.963 [2024-06-07 23:29:20.524287] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.963 [2024-06-07 23:29:20.526570] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.963 [2024-06-07 23:29:20.535589] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.963 [2024-06-07 23:29:20.536170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.963 [2024-06-07 23:29:20.536520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.963 [2024-06-07 23:29:20.536534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.963 [2024-06-07 23:29:20.536544] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.963 [2024-06-07 23:29:20.536691] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.963 [2024-06-07 23:29:20.536818] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.963 [2024-06-07 23:29:20.536826] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.963 [2024-06-07 23:29:20.536834] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.963 [2024-06-07 23:29:20.539073] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.963 [2024-06-07 23:29:20.548102] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.963 [2024-06-07 23:29:20.548691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.963 [2024-06-07 23:29:20.549060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.963 [2024-06-07 23:29:20.549074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.963 [2024-06-07 23:29:20.549083] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.963 [2024-06-07 23:29:20.549226] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.963 [2024-06-07 23:29:20.549343] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.963 [2024-06-07 23:29:20.549352] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.963 [2024-06-07 23:29:20.549360] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.963 [2024-06-07 23:29:20.551579] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.963 [2024-06-07 23:29:20.560635] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.963 [2024-06-07 23:29:20.561128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.963 [2024-06-07 23:29:20.561426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.963 [2024-06-07 23:29:20.561437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.963 [2024-06-07 23:29:20.561444] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.963 [2024-06-07 23:29:20.561622] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.963 [2024-06-07 23:29:20.561765] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.963 [2024-06-07 23:29:20.561772] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.963 [2024-06-07 23:29:20.561779] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.963 [2024-06-07 23:29:20.564045] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.963 [2024-06-07 23:29:20.573263] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.963 [2024-06-07 23:29:20.573708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.963 [2024-06-07 23:29:20.574079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.963 [2024-06-07 23:29:20.574091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.963 [2024-06-07 23:29:20.574100] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.963 [2024-06-07 23:29:20.574288] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.963 [2024-06-07 23:29:20.574438] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.963 [2024-06-07 23:29:20.574447] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.963 [2024-06-07 23:29:20.574454] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.963 [2024-06-07 23:29:20.576745] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.963 [2024-06-07 23:29:20.585745] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.963 [2024-06-07 23:29:20.586271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.963 [2024-06-07 23:29:20.586703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.963 [2024-06-07 23:29:20.586716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.963 [2024-06-07 23:29:20.586725] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.963 [2024-06-07 23:29:20.586868] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.963 [2024-06-07 23:29:20.587013] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.963 [2024-06-07 23:29:20.587022] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.963 [2024-06-07 23:29:20.587029] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.963 [2024-06-07 23:29:20.589290] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.963 [2024-06-07 23:29:20.598453] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.963 [2024-06-07 23:29:20.599031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.964 [2024-06-07 23:29:20.599402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.964 [2024-06-07 23:29:20.599416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.964 [2024-06-07 23:29:20.599425] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.964 [2024-06-07 23:29:20.599587] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.964 [2024-06-07 23:29:20.599714] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.964 [2024-06-07 23:29:20.599722] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.964 [2024-06-07 23:29:20.599729] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.964 [2024-06-07 23:29:20.602023] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.964 [2024-06-07 23:29:20.610947] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.964 [2024-06-07 23:29:20.611536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.964 [2024-06-07 23:29:20.611908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.964 [2024-06-07 23:29:20.611921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.964 [2024-06-07 23:29:20.611930] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.964 [2024-06-07 23:29:20.612109] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.964 [2024-06-07 23:29:20.612265] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.964 [2024-06-07 23:29:20.612274] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.964 [2024-06-07 23:29:20.612282] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.964 [2024-06-07 23:29:20.614519] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:57.964 [2024-06-07 23:29:20.623611] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.964 [2024-06-07 23:29:20.624205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.964 [2024-06-07 23:29:20.624661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.964 [2024-06-07 23:29:20.624674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.964 [2024-06-07 23:29:20.624683] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.964 [2024-06-07 23:29:20.624863] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.964 [2024-06-07 23:29:20.625044] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.964 [2024-06-07 23:29:20.625052] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.964 [2024-06-07 23:29:20.625059] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.964 [2024-06-07 23:29:20.627672] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:57.964 [2024-06-07 23:29:20.636188] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.964 [2024-06-07 23:29:20.636815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.964 [2024-06-07 23:29:20.637185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:57.964 [2024-06-07 23:29:20.637198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:57.964 [2024-06-07 23:29:20.637207] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:57.964 [2024-06-07 23:29:20.637395] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:57.964 [2024-06-07 23:29:20.637541] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:57.964 [2024-06-07 23:29:20.637549] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:57.964 [2024-06-07 23:29:20.637557] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.964 [2024-06-07 23:29:20.639813] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:58.226 [2024-06-07 23:29:20.648561] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.226 [2024-06-07 23:29:20.649098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.226 [2024-06-07 23:29:20.649456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.226 [2024-06-07 23:29:20.649470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.226 [2024-06-07 23:29:20.649480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.227 [2024-06-07 23:29:20.649622] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.227 [2024-06-07 23:29:20.649712] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.227 [2024-06-07 23:29:20.649721] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.227 [2024-06-07 23:29:20.649732] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.227 [2024-06-07 23:29:20.651917] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:58.227 [2024-06-07 23:29:20.661169] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.227 [2024-06-07 23:29:20.661758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.227 [2024-06-07 23:29:20.662179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.227 [2024-06-07 23:29:20.662191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.227 [2024-06-07 23:29:20.662201] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.227 [2024-06-07 23:29:20.662352] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.227 [2024-06-07 23:29:20.662497] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.227 [2024-06-07 23:29:20.662505] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.227 [2024-06-07 23:29:20.662513] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.227 [2024-06-07 23:29:20.664825] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:58.227 [2024-06-07 23:29:20.673783] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.227 [2024-06-07 23:29:20.674377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.227 [2024-06-07 23:29:20.674760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.227 [2024-06-07 23:29:20.674773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.227 [2024-06-07 23:29:20.674783] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.227 [2024-06-07 23:29:20.674889] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.227 [2024-06-07 23:29:20.674979] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.227 [2024-06-07 23:29:20.674987] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.227 [2024-06-07 23:29:20.674994] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.227 [2024-06-07 23:29:20.677274] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:58.227 [2024-06-07 23:29:20.686436] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.227 [2024-06-07 23:29:20.686928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.227 [2024-06-07 23:29:20.687274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.227 [2024-06-07 23:29:20.687285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.227 [2024-06-07 23:29:20.687293] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.227 [2024-06-07 23:29:20.687454] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.227 [2024-06-07 23:29:20.687632] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.227 [2024-06-07 23:29:20.687639] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.227 [2024-06-07 23:29:20.687651] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.227 [2024-06-07 23:29:20.689922] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:58.227 [2024-06-07 23:29:20.698820] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.227 [2024-06-07 23:29:20.699395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.227 [2024-06-07 23:29:20.699783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.227 [2024-06-07 23:29:20.699795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.227 [2024-06-07 23:29:20.699804] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.227 [2024-06-07 23:29:20.699965] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.227 [2024-06-07 23:29:20.700110] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.227 [2024-06-07 23:29:20.700118] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.227 [2024-06-07 23:29:20.700126] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.227 [2024-06-07 23:29:20.702443] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:58.227 [2024-06-07 23:29:20.711154] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.227 [2024-06-07 23:29:20.711757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.227 [2024-06-07 23:29:20.712027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.227 [2024-06-07 23:29:20.712041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.227 [2024-06-07 23:29:20.712050] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.227 [2024-06-07 23:29:20.712194] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.227 [2024-06-07 23:29:20.712310] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.227 [2024-06-07 23:29:20.712320] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.227 [2024-06-07 23:29:20.712328] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.227 [2024-06-07 23:29:20.714620] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:58.227 [2024-06-07 23:29:20.723697] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.227 [2024-06-07 23:29:20.724269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.227 [2024-06-07 23:29:20.724700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.227 [2024-06-07 23:29:20.724713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.227 [2024-06-07 23:29:20.724722] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.227 [2024-06-07 23:29:20.724865] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.227 [2024-06-07 23:29:20.724992] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.227 [2024-06-07 23:29:20.725000] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.227 [2024-06-07 23:29:20.725007] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.227 [2024-06-07 23:29:20.727390] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:58.227 [2024-06-07 23:29:20.736206] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.227 [2024-06-07 23:29:20.736716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.227 [2024-06-07 23:29:20.737019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.227 [2024-06-07 23:29:20.737030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.227 [2024-06-07 23:29:20.737037] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.227 [2024-06-07 23:29:20.737163] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.227 [2024-06-07 23:29:20.737312] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.227 [2024-06-07 23:29:20.737322] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.227 [2024-06-07 23:29:20.737329] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.227 [2024-06-07 23:29:20.739781] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:58.227 [2024-06-07 23:29:20.748637] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.227 [2024-06-07 23:29:20.749084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.227 [2024-06-07 23:29:20.749429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.227 [2024-06-07 23:29:20.749439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.227 [2024-06-07 23:29:20.749446] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.227 [2024-06-07 23:29:20.749552] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.227 [2024-06-07 23:29:20.749729] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.227 [2024-06-07 23:29:20.749737] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.227 [2024-06-07 23:29:20.749743] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.227 [2024-06-07 23:29:20.751885] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:58.227 [2024-06-07 23:29:20.760916] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.227 [2024-06-07 23:29:20.761407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.227 [2024-06-07 23:29:20.761749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.227 [2024-06-07 23:29:20.761758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.227 [2024-06-07 23:29:20.761766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.227 [2024-06-07 23:29:20.761926] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.227 [2024-06-07 23:29:20.762103] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.228 [2024-06-07 23:29:20.762111] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.228 [2024-06-07 23:29:20.762118] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.228 [2024-06-07 23:29:20.764333] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:58.228 [2024-06-07 23:29:20.773304] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.228 [2024-06-07 23:29:20.773785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.228 [2024-06-07 23:29:20.774137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.228 [2024-06-07 23:29:20.774146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.228 [2024-06-07 23:29:20.774153] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.228 [2024-06-07 23:29:20.774300] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.228 [2024-06-07 23:29:20.774478] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.228 [2024-06-07 23:29:20.774486] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.228 [2024-06-07 23:29:20.774493] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.228 [2024-06-07 23:29:20.776832] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:58.228 [2024-06-07 23:29:20.785647] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.228 [2024-06-07 23:29:20.786139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.228 [2024-06-07 23:29:20.786495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.228 [2024-06-07 23:29:20.786505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.228 [2024-06-07 23:29:20.786512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.228 [2024-06-07 23:29:20.786617] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.228 [2024-06-07 23:29:20.786759] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.228 [2024-06-07 23:29:20.786767] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.228 [2024-06-07 23:29:20.786773] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.228 [2024-06-07 23:29:20.788930] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:58.228 [2024-06-07 23:29:20.798004] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.228 [2024-06-07 23:29:20.798523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.228 [2024-06-07 23:29:20.798853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.228 [2024-06-07 23:29:20.798862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.228 [2024-06-07 23:29:20.798869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.228 [2024-06-07 23:29:20.799029] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.228 [2024-06-07 23:29:20.799153] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.228 [2024-06-07 23:29:20.799160] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.228 [2024-06-07 23:29:20.799167] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.228 [2024-06-07 23:29:20.801219] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:58.228 [2024-06-07 23:29:20.810410] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.228 [2024-06-07 23:29:20.810999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.228 [2024-06-07 23:29:20.811290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.228 [2024-06-07 23:29:20.811305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.228 [2024-06-07 23:29:20.811315] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.228 [2024-06-07 23:29:20.811475] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.228 [2024-06-07 23:29:20.811622] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.228 [2024-06-07 23:29:20.811629] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.228 [2024-06-07 23:29:20.811637] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.228 [2024-06-07 23:29:20.813930] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:58.228 [2024-06-07 23:29:20.823000] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.228 [2024-06-07 23:29:20.823580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.228 [2024-06-07 23:29:20.823951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.228 [2024-06-07 23:29:20.823963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.228 [2024-06-07 23:29:20.823972] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.228 [2024-06-07 23:29:20.824115] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.228 [2024-06-07 23:29:20.824268] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.228 [2024-06-07 23:29:20.824277] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.228 [2024-06-07 23:29:20.824284] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.228 [2024-06-07 23:29:20.826493] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:58.228 [2024-06-07 23:29:20.835647] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.228 [2024-06-07 23:29:20.836232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.228 [2024-06-07 23:29:20.836625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.228 [2024-06-07 23:29:20.836638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.228 [2024-06-07 23:29:20.836647] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.228 [2024-06-07 23:29:20.836808] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.228 [2024-06-07 23:29:20.836935] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.228 [2024-06-07 23:29:20.836943] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.228 [2024-06-07 23:29:20.836951] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.228 [2024-06-07 23:29:20.839357] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:58.228 [2024-06-07 23:29:20.848015] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.228 [2024-06-07 23:29:20.848622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.228 [2024-06-07 23:29:20.848989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.228 [2024-06-07 23:29:20.849001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.228 [2024-06-07 23:29:20.849015] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.228 [2024-06-07 23:29:20.849140] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.228 [2024-06-07 23:29:20.849256] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.228 [2024-06-07 23:29:20.849264] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.228 [2024-06-07 23:29:20.849271] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.228 [2024-06-07 23:29:20.851473] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:58.228 [2024-06-07 23:29:20.860437] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.228 [2024-06-07 23:29:20.861010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.228 [2024-06-07 23:29:20.861385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.228 [2024-06-07 23:29:20.861399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.228 [2024-06-07 23:29:20.861408] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.228 [2024-06-07 23:29:20.861551] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.228 [2024-06-07 23:29:20.861696] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.228 [2024-06-07 23:29:20.861704] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.228 [2024-06-07 23:29:20.861712] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.228 [2024-06-07 23:29:20.864060] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:58.228 [2024-06-07 23:29:20.872838] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.228 [2024-06-07 23:29:20.873339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.228 [2024-06-07 23:29:20.873681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.228 [2024-06-07 23:29:20.873691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.228 [2024-06-07 23:29:20.873698] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.228 [2024-06-07 23:29:20.873876] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.228 [2024-06-07 23:29:20.874000] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.228 [2024-06-07 23:29:20.874007] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.228 [2024-06-07 23:29:20.874014] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.228 [2024-06-07 23:29:20.876288] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:58.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3049331 Killed "${NVMF_APP[@]}" "$@" 00:32:58.228 23:29:20 -- host/bdevperf.sh@36 -- # tgt_init 00:32:58.228 23:29:20 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:58.229 23:29:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:58.229 23:29:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:58.229 23:29:20 -- common/autotest_common.sh@10 -- # set +x 00:32:58.229 [2024-06-07 23:29:20.885415] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.229 [2024-06-07 23:29:20.885908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.229 [2024-06-07 23:29:20.886275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.229 [2024-06-07 23:29:20.886289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.229 [2024-06-07 23:29:20.886298] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.229 [2024-06-07 23:29:20.886496] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.229 [2024-06-07 23:29:20.886605] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.229 [2024-06-07 23:29:20.886612] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.229 [2024-06-07 23:29:20.886620] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.229 [2024-06-07 23:29:20.888733] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:58.229 23:29:20 -- nvmf/common.sh@469 -- # nvmfpid=3051017 00:32:58.229 23:29:20 -- nvmf/common.sh@470 -- # waitforlisten 3051017 00:32:58.229 23:29:20 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:58.229 23:29:20 -- common/autotest_common.sh@819 -- # '[' -z 3051017 ']' 00:32:58.229 23:29:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:58.229 23:29:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:58.229 23:29:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:58.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:58.229 23:29:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:58.229 23:29:20 -- common/autotest_common.sh@10 -- # set +x 00:32:58.229 [2024-06-07 23:29:20.897991] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.229 [2024-06-07 23:29:20.898493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.229 [2024-06-07 23:29:20.898873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.229 [2024-06-07 23:29:20.898883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.229 [2024-06-07 23:29:20.898891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.229 [2024-06-07 23:29:20.899051] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.229 [2024-06-07 23:29:20.899212] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.229 [2024-06-07 23:29:20.899220] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.229 [2024-06-07 23:29:20.899227] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.229 [2024-06-07 23:29:20.901524] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
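Note: every "connect() failed, errno = 111" record above is ECONNREFUSED. The refusals line up with the target restart interleaved here: bdevperf.sh line 35 has just killed the old "${NVMF_APP[@]}" process and nvmfappstart is launching a fresh nvmf_tgt (pid 3051017), so until it is listening again on 10.0.0.2 port 4420 each bdev_nvme reset attempt is refused and the controller stays in the failed state. A minimal standalone sketch of what that errno means, plain C and not SPDK code, with the address and port taken from the log:

/* Sketch only: reproduces the "connect() failed, errno = 111" seen above
 * while nothing is listening on 10.0.0.2:4420. 111 is ECONNREFUSED on Linux. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };

    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* While the target is down this prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

Once the new nvmf_tgt is up and listening, the same connect() succeeds and the reset/reconnect attempts stop failing.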
00:32:58.491 [2024-06-07 23:29:20.910472] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.491 [2024-06-07 23:29:20.910958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.491 [2024-06-07 23:29:20.911187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.491 [2024-06-07 23:29:20.911197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.491 [2024-06-07 23:29:20.911204] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.491 [2024-06-07 23:29:20.911333] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.491 [2024-06-07 23:29:20.911462] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.491 [2024-06-07 23:29:20.911470] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.491 [2024-06-07 23:29:20.911477] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.491 [2024-06-07 23:29:20.913817] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:58.491 [2024-06-07 23:29:20.922907] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.491 [2024-06-07 23:29:20.923379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.491 [2024-06-07 23:29:20.923812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.491 [2024-06-07 23:29:20.923822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.491 [2024-06-07 23:29:20.923829] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.491 [2024-06-07 23:29:20.923952] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.491 [2024-06-07 23:29:20.924113] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.491 [2024-06-07 23:29:20.924121] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.491 [2024-06-07 23:29:20.924127] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.491 [2024-06-07 23:29:20.926247] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:58.491 [2024-06-07 23:29:20.935501] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.491 [2024-06-07 23:29:20.936011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.491 [2024-06-07 23:29:20.936341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.491 [2024-06-07 23:29:20.936351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.491 [2024-06-07 23:29:20.936359] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.491 [2024-06-07 23:29:20.936519] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.491 [2024-06-07 23:29:20.936642] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.491 [2024-06-07 23:29:20.936649] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.491 [2024-06-07 23:29:20.936656] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.491 [2024-06-07 23:29:20.938726] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:58.491 [2024-06-07 23:29:20.939306] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:32:58.492 [2024-06-07 23:29:20.939350] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:58.492 [2024-06-07 23:29:20.948113] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.492 [2024-06-07 23:29:20.948632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.492 [2024-06-07 23:29:20.948874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.492 [2024-06-07 23:29:20.948891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.492 [2024-06-07 23:29:20.948905] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.492 [2024-06-07 23:29:20.949047] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.492 [2024-06-07 23:29:20.949174] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.492 [2024-06-07 23:29:20.949181] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.492 [2024-06-07 23:29:20.949188] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.492 [2024-06-07 23:29:20.951391] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
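The DPDK EAL parameters in the startup line above include -c 0xE, the same mask passed to nvmf_tgt as -m 0xE earlier; the set bits are 1, 2 and 3, which matches the "Total cores available: 3" notice and the reactors that start on cores 1-3 further down. A small sketch for decoding such a mask (plain bash, not part of the test suite):

    # Decode a core mask: each set bit selects one core.
    mask=0xE
    cores=""
    for core in $(seq 0 31); do
        (( (mask >> core) & 1 )) && cores+="$core "
    done
    echo "mask $mask -> cores: $cores"   # prints: mask 0xE -> cores: 1 2 3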
00:32:58.492 [2024-06-07 23:29:20.960626] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.492 [2024-06-07 23:29:20.961167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.492 [2024-06-07 23:29:20.961552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.492 [2024-06-07 23:29:20.961567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.492 [2024-06-07 23:29:20.961577] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.492 [2024-06-07 23:29:20.961739] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.492 [2024-06-07 23:29:20.961884] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.492 [2024-06-07 23:29:20.961892] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.492 [2024-06-07 23:29:20.961899] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.492 [2024-06-07 23:29:20.964085] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:58.492 EAL: No free 2048 kB hugepages reported on node 1 00:32:58.492 [2024-06-07 23:29:20.973068] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.492 [2024-06-07 23:29:20.973481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.492 [2024-06-07 23:29:20.973834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.492 [2024-06-07 23:29:20.973845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.492 [2024-06-07 23:29:20.973852] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.492 [2024-06-07 23:29:20.973995] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.492 [2024-06-07 23:29:20.974156] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.492 [2024-06-07 23:29:20.974163] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.492 [2024-06-07 23:29:20.974171] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.492 [2024-06-07 23:29:20.976281] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
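The "No free 2048 kB hugepages reported on node 1" line above is an EAL notice emitted during memory setup; in this run the target keeps initializing afterwards, so it is informational here, typically meaning the reserved pages sit on another NUMA node or in another size class. A quick way to see what the host actually has reserved, using standard Linux interfaces rather than anything from the harness:

    grep -i huge /proc/meminfo
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages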
00:32:58.492 [2024-06-07 23:29:20.985316] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.492 [2024-06-07 23:29:20.985914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.492 [2024-06-07 23:29:20.986297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.492 [2024-06-07 23:29:20.986310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.492 [2024-06-07 23:29:20.986320] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.492 [2024-06-07 23:29:20.986485] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.492 [2024-06-07 23:29:20.986630] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.492 [2024-06-07 23:29:20.986639] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.492 [2024-06-07 23:29:20.986647] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.492 [2024-06-07 23:29:20.988904] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:58.492 [2024-06-07 23:29:20.997735] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.492 [2024-06-07 23:29:20.998255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.492 [2024-06-07 23:29:20.998608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.492 [2024-06-07 23:29:20.998618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.492 [2024-06-07 23:29:20.998625] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.492 [2024-06-07 23:29:20.998750] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.492 [2024-06-07 23:29:20.998910] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.492 [2024-06-07 23:29:20.998918] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.492 [2024-06-07 23:29:20.998926] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.492 [2024-06-07 23:29:21.001286] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:58.492 [2024-06-07 23:29:21.010412] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.492 [2024-06-07 23:29:21.011005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.492 [2024-06-07 23:29:21.011397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.492 [2024-06-07 23:29:21.011411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.492 [2024-06-07 23:29:21.011421] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.492 [2024-06-07 23:29:21.011564] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.492 [2024-06-07 23:29:21.011728] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.492 [2024-06-07 23:29:21.011737] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.492 [2024-06-07 23:29:21.011745] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.492 [2024-06-07 23:29:21.014022] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:58.492 [2024-06-07 23:29:21.020919] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:58.492 [2024-06-07 23:29:21.022966] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.492 [2024-06-07 23:29:21.023505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.492 [2024-06-07 23:29:21.023850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.492 [2024-06-07 23:29:21.023861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.492 [2024-06-07 23:29:21.023868] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.492 [2024-06-07 23:29:21.024016] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.492 [2024-06-07 23:29:21.024178] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.492 [2024-06-07 23:29:21.024186] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.492 [2024-06-07 23:29:21.024192] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.492 [2024-06-07 23:29:21.026386] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:58.492 [2024-06-07 23:29:21.035378] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:58.492 [2024-06-07 23:29:21.035944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:58.492 [2024-06-07 23:29:21.036318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:58.492 [2024-06-07 23:29:21.036332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420
00:32:58.492 [2024-06-07 23:29:21.036341] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set
00:32:58.492 [2024-06-07 23:29:21.036580] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor
00:32:58.492 [2024-06-07 23:29:21.036707] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:58.492 [2024-06-07 23:29:21.036715] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:58.492 [2024-06-07 23:29:21.036723] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:58.492 [2024-06-07 23:29:21.038981] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:58.492 [2024-06-07 23:29:21.047755] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:58.492 [2024-06-07 23:29:21.047784] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:32:58.492 [2024-06-07 23:29:21.047869] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:58.492 [2024-06-07 23:29:21.047874] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:58.492 [2024-06-07 23:29:21.047879] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
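The app_setup_trace notices above give the two supported ways to look at the tracepoints enabled by -e 0xFFFF: attach spdk_trace to instance 0 live, or copy the shared-memory trace file for offline decoding. Both commands below are taken from those notices; the destination path is an illustrative choice, and spdk_trace is assumed to come from the same SPDK build:

    # Live snapshot of events from target instance 0 (command suggested by the target itself):
    spdk_trace -s nvmf -i 0
    # Or preserve the trace file for offline analysis/debug:
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0.$(date +%s)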
00:32:58.492 [2024-06-07 23:29:21.047995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:58.492 [2024-06-07 23:29:21.048144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:58.492 [2024-06-07 23:29:21.048146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:58.492 [2024-06-07 23:29:21.048291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.492 [2024-06-07 23:29:21.048429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.492 [2024-06-07 23:29:21.048442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.492 [2024-06-07 23:29:21.048450] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.493 [2024-06-07 23:29:21.048564] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.493 [2024-06-07 23:29:21.048652] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.493 [2024-06-07 23:29:21.048660] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.493 [2024-06-07 23:29:21.048667] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.493 [2024-06-07 23:29:21.050914] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:58.493 [2024-06-07 23:29:21.060171] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.493 [2024-06-07 23:29:21.060803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.493 [2024-06-07 23:29:21.061186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.493 [2024-06-07 23:29:21.061198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.493 [2024-06-07 23:29:21.061208] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.493 [2024-06-07 23:29:21.061377] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.493 [2024-06-07 23:29:21.061524] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.493 [2024-06-07 23:29:21.061532] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.493 [2024-06-07 23:29:21.061540] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.493 [2024-06-07 23:29:21.063831] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:58.493 [2024-06-07 23:29:21.072645] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.493 [2024-06-07 23:29:21.073130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.493 [2024-06-07 23:29:21.073513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.493 [2024-06-07 23:29:21.073524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.493 [2024-06-07 23:29:21.073532] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.493 [2024-06-07 23:29:21.073638] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.493 [2024-06-07 23:29:21.073762] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.493 [2024-06-07 23:29:21.073770] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.493 [2024-06-07 23:29:21.073777] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.493 [2024-06-07 23:29:21.076098] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:58.493 [2024-06-07 23:29:21.085116] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.493 [2024-06-07 23:29:21.085774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.493 [2024-06-07 23:29:21.086174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.493 [2024-06-07 23:29:21.086186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.493 [2024-06-07 23:29:21.086196] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.493 [2024-06-07 23:29:21.086423] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.493 [2024-06-07 23:29:21.086533] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.493 [2024-06-07 23:29:21.086541] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.493 [2024-06-07 23:29:21.086549] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.493 [2024-06-07 23:29:21.088842] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:58.493 [2024-06-07 23:29:21.097599] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.493 [2024-06-07 23:29:21.098195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.493 [2024-06-07 23:29:21.098612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.493 [2024-06-07 23:29:21.098626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.493 [2024-06-07 23:29:21.098635] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.493 [2024-06-07 23:29:21.098815] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.493 [2024-06-07 23:29:21.098961] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.493 [2024-06-07 23:29:21.098969] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.493 [2024-06-07 23:29:21.098977] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.493 [2024-06-07 23:29:21.101218] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:58.493 [2024-06-07 23:29:21.110130] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.493 [2024-06-07 23:29:21.110789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.493 [2024-06-07 23:29:21.111017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.493 [2024-06-07 23:29:21.111030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.493 [2024-06-07 23:29:21.111039] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.493 [2024-06-07 23:29:21.111201] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.493 [2024-06-07 23:29:21.111371] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.493 [2024-06-07 23:29:21.111380] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.493 [2024-06-07 23:29:21.111388] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.493 [2024-06-07 23:29:21.113553] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:58.493 [2024-06-07 23:29:21.122505] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.493 [2024-06-07 23:29:21.123108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.493 [2024-06-07 23:29:21.123405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.493 [2024-06-07 23:29:21.123420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.493 [2024-06-07 23:29:21.123430] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.493 [2024-06-07 23:29:21.123610] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.493 [2024-06-07 23:29:21.123719] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.493 [2024-06-07 23:29:21.123728] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.493 [2024-06-07 23:29:21.123735] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.493 [2024-06-07 23:29:21.126043] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:58.493 [2024-06-07 23:29:21.134884] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.493 [2024-06-07 23:29:21.135592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.493 [2024-06-07 23:29:21.135818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.493 [2024-06-07 23:29:21.135834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.493 [2024-06-07 23:29:21.135844] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.493 [2024-06-07 23:29:21.135931] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.493 [2024-06-07 23:29:21.136077] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.493 [2024-06-07 23:29:21.136087] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.493 [2024-06-07 23:29:21.136094] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.493 [2024-06-07 23:29:21.138101] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:58.493 [2024-06-07 23:29:21.147121] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.493 [2024-06-07 23:29:21.147754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.493 [2024-06-07 23:29:21.148099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.493 [2024-06-07 23:29:21.148112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.493 [2024-06-07 23:29:21.148121] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.493 [2024-06-07 23:29:21.148272] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.493 [2024-06-07 23:29:21.148381] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.493 [2024-06-07 23:29:21.148389] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.493 [2024-06-07 23:29:21.148396] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.493 [2024-06-07 23:29:21.150634] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:58.493 [2024-06-07 23:29:21.159369] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.493 [2024-06-07 23:29:21.159880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.493 [2024-06-07 23:29:21.160145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.493 [2024-06-07 23:29:21.160158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.493 [2024-06-07 23:29:21.160168] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.493 [2024-06-07 23:29:21.160336] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.493 [2024-06-07 23:29:21.160451] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.493 [2024-06-07 23:29:21.160460] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.493 [2024-06-07 23:29:21.160467] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.494 [2024-06-07 23:29:21.162739] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:58.757 [2024-06-07 23:29:21.171866] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.757 [2024-06-07 23:29:21.172356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.757 [2024-06-07 23:29:21.172556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.757 [2024-06-07 23:29:21.172565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.757 [2024-06-07 23:29:21.172578] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.757 [2024-06-07 23:29:21.172721] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.757 [2024-06-07 23:29:21.172863] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.757 [2024-06-07 23:29:21.172879] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.757 [2024-06-07 23:29:21.172886] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.757 [2024-06-07 23:29:21.175100] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:58.757 [2024-06-07 23:29:21.184401] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.757 [2024-06-07 23:29:21.185062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.757 [2024-06-07 23:29:21.185338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.757 [2024-06-07 23:29:21.185352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.757 [2024-06-07 23:29:21.185361] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.757 [2024-06-07 23:29:21.185504] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.757 [2024-06-07 23:29:21.185631] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.757 [2024-06-07 23:29:21.185639] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.757 [2024-06-07 23:29:21.185646] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.757 [2024-06-07 23:29:21.187902] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:58.757 [2024-06-07 23:29:21.196767] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.757 [2024-06-07 23:29:21.197214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.757 [2024-06-07 23:29:21.197316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.757 [2024-06-07 23:29:21.197326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.757 [2024-06-07 23:29:21.197334] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.757 [2024-06-07 23:29:21.197477] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.757 [2024-06-07 23:29:21.197601] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.757 [2024-06-07 23:29:21.197608] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.757 [2024-06-07 23:29:21.197615] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.757 [2024-06-07 23:29:21.199917] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:58.757 [2024-06-07 23:29:21.209363] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.757 [2024-06-07 23:29:21.209709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.757 [2024-06-07 23:29:21.210035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.757 [2024-06-07 23:29:21.210052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.757 [2024-06-07 23:29:21.210060] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.757 [2024-06-07 23:29:21.210152] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.757 [2024-06-07 23:29:21.210319] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.757 [2024-06-07 23:29:21.210328] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.757 [2024-06-07 23:29:21.210335] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.757 [2024-06-07 23:29:21.212601] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:58.757 [2024-06-07 23:29:21.221767] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.757 [2024-06-07 23:29:21.222344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.757 [2024-06-07 23:29:21.222721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.757 [2024-06-07 23:29:21.222734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.757 [2024-06-07 23:29:21.222743] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.757 [2024-06-07 23:29:21.222941] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.757 [2024-06-07 23:29:21.223068] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.757 [2024-06-07 23:29:21.223076] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.757 [2024-06-07 23:29:21.223083] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.757 [2024-06-07 23:29:21.225354] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:58.757 [2024-06-07 23:29:21.233964] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.757 [2024-06-07 23:29:21.234507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.757 [2024-06-07 23:29:21.234859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.757 [2024-06-07 23:29:21.234868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.757 [2024-06-07 23:29:21.234876] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.757 [2024-06-07 23:29:21.234999] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.757 [2024-06-07 23:29:21.235141] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.757 [2024-06-07 23:29:21.235149] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.757 [2024-06-07 23:29:21.235156] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.758 [2024-06-07 23:29:21.237393] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:58.758 [2024-06-07 23:29:21.246556] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.758 [2024-06-07 23:29:21.246929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.758 [2024-06-07 23:29:21.247323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.758 [2024-06-07 23:29:21.247338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.758 [2024-06-07 23:29:21.247347] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.758 [2024-06-07 23:29:21.247526] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.758 [2024-06-07 23:29:21.247712] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.758 [2024-06-07 23:29:21.247721] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.758 [2024-06-07 23:29:21.247728] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.758 [2024-06-07 23:29:21.250060] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:58.758 [2024-06-07 23:29:21.259129] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.758 [2024-06-07 23:29:21.259776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.758 [2024-06-07 23:29:21.260156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.758 [2024-06-07 23:29:21.260169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.758 [2024-06-07 23:29:21.260178] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.758 [2024-06-07 23:29:21.260401] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.758 [2024-06-07 23:29:21.260566] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.758 [2024-06-07 23:29:21.260574] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.758 [2024-06-07 23:29:21.260582] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.758 [2024-06-07 23:29:21.262819] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:58.758 [2024-06-07 23:29:21.271729] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.758 [2024-06-07 23:29:21.272211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.758 [2024-06-07 23:29:21.272546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.758 [2024-06-07 23:29:21.272560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.758 [2024-06-07 23:29:21.272570] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.758 [2024-06-07 23:29:21.272750] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.758 [2024-06-07 23:29:21.272932] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.758 [2024-06-07 23:29:21.272941] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.758 [2024-06-07 23:29:21.272948] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.758 [2024-06-07 23:29:21.275299] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:58.758 [2024-06-07 23:29:21.284187] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.758 [2024-06-07 23:29:21.284781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.758 [2024-06-07 23:29:21.285023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.758 [2024-06-07 23:29:21.285035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.758 [2024-06-07 23:29:21.285044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.758 [2024-06-07 23:29:21.285205] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.758 [2024-06-07 23:29:21.285340] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.758 [2024-06-07 23:29:21.285354] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.758 [2024-06-07 23:29:21.285361] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.758 [2024-06-07 23:29:21.287545] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:58.758 [2024-06-07 23:29:21.296658] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.758 [2024-06-07 23:29:21.297287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.758 [2024-06-07 23:29:21.297683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.758 [2024-06-07 23:29:21.297697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.758 [2024-06-07 23:29:21.297706] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.758 [2024-06-07 23:29:21.297904] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.758 [2024-06-07 23:29:21.298067] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.758 [2024-06-07 23:29:21.298075] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.758 [2024-06-07 23:29:21.298082] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.758 [2024-06-07 23:29:21.300545] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:58.758 [2024-06-07 23:29:21.309128] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.758 [2024-06-07 23:29:21.309651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.758 [2024-06-07 23:29:21.309998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.758 [2024-06-07 23:29:21.310007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.758 [2024-06-07 23:29:21.310015] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.758 [2024-06-07 23:29:21.310157] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.758 [2024-06-07 23:29:21.310323] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.758 [2024-06-07 23:29:21.310331] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.758 [2024-06-07 23:29:21.310338] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.758 [2024-06-07 23:29:21.312555] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:58.758 [2024-06-07 23:29:21.321647] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.758 [2024-06-07 23:29:21.322103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.758 [2024-06-07 23:29:21.322476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.758 [2024-06-07 23:29:21.322487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.758 [2024-06-07 23:29:21.322494] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.758 [2024-06-07 23:29:21.322618] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.758 [2024-06-07 23:29:21.322742] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.758 [2024-06-07 23:29:21.322750] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.758 [2024-06-07 23:29:21.322762] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.758 [2024-06-07 23:29:21.325029] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:58.758 [2024-06-07 23:29:21.334233] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.758 [2024-06-07 23:29:21.334890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.758 [2024-06-07 23:29:21.335232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.758 [2024-06-07 23:29:21.335252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.758 [2024-06-07 23:29:21.335262] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.758 [2024-06-07 23:29:21.335406] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.758 [2024-06-07 23:29:21.335551] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.758 [2024-06-07 23:29:21.335559] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.758 [2024-06-07 23:29:21.335567] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.758 [2024-06-07 23:29:21.337952] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:58.758 [2024-06-07 23:29:21.346840] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.758 [2024-06-07 23:29:21.347199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.758 [2024-06-07 23:29:21.347622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.758 [2024-06-07 23:29:21.347636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.758 [2024-06-07 23:29:21.347645] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.758 [2024-06-07 23:29:21.347824] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.758 [2024-06-07 23:29:21.347969] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.758 [2024-06-07 23:29:21.347977] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.758 [2024-06-07 23:29:21.347984] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.758 [2024-06-07 23:29:21.349989] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:58.758 [2024-06-07 23:29:21.359364] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.758 [2024-06-07 23:29:21.359824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.758 [2024-06-07 23:29:21.360174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.759 [2024-06-07 23:29:21.360183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.759 [2024-06-07 23:29:21.360191] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.759 [2024-06-07 23:29:21.360301] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.759 [2024-06-07 23:29:21.360462] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.759 [2024-06-07 23:29:21.360470] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.759 [2024-06-07 23:29:21.360477] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.759 [2024-06-07 23:29:21.362786] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:58.759 [2024-06-07 23:29:21.371957] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.759 [2024-06-07 23:29:21.372585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.759 [2024-06-07 23:29:21.372974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.759 [2024-06-07 23:29:21.372986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.759 [2024-06-07 23:29:21.372995] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.759 [2024-06-07 23:29:21.373083] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.759 [2024-06-07 23:29:21.373191] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.759 [2024-06-07 23:29:21.373200] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.759 [2024-06-07 23:29:21.373207] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.759 [2024-06-07 23:29:21.375543] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:58.759 [2024-06-07 23:29:21.384327] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.759 [2024-06-07 23:29:21.384920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.759 [2024-06-07 23:29:21.385449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.759 [2024-06-07 23:29:21.385486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.759 [2024-06-07 23:29:21.385497] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.759 [2024-06-07 23:29:21.385658] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.759 [2024-06-07 23:29:21.385785] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.759 [2024-06-07 23:29:21.385793] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.759 [2024-06-07 23:29:21.385800] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.759 [2024-06-07 23:29:21.388224] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:58.759 [2024-06-07 23:29:21.396737] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.759 [2024-06-07 23:29:21.397235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.759 [2024-06-07 23:29:21.397459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.759 [2024-06-07 23:29:21.397469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.759 [2024-06-07 23:29:21.397477] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.759 [2024-06-07 23:29:21.397583] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.759 [2024-06-07 23:29:21.397689] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.759 [2024-06-07 23:29:21.397697] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.759 [2024-06-07 23:29:21.397703] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.759 [2024-06-07 23:29:21.400135] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:58.759 [2024-06-07 23:29:21.409216] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.759 [2024-06-07 23:29:21.409730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.759 [2024-06-07 23:29:21.409928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.759 [2024-06-07 23:29:21.409937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.759 [2024-06-07 23:29:21.409945] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.759 [2024-06-07 23:29:21.410050] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.759 [2024-06-07 23:29:21.410210] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.759 [2024-06-07 23:29:21.410218] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.759 [2024-06-07 23:29:21.410225] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.759 [2024-06-07 23:29:21.412404] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:58.759 [2024-06-07 23:29:21.421853] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.759 [2024-06-07 23:29:21.422359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.759 [2024-06-07 23:29:21.422753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.759 [2024-06-07 23:29:21.422762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.759 [2024-06-07 23:29:21.422769] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.759 [2024-06-07 23:29:21.422911] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.759 [2024-06-07 23:29:21.423053] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.759 [2024-06-07 23:29:21.423060] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.759 [2024-06-07 23:29:21.423067] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:58.759 [2024-06-07 23:29:21.425291] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:58.759 [2024-06-07 23:29:21.434473] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:58.759 [2024-06-07 23:29:21.434903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.759 [2024-06-07 23:29:21.435163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:58.759 [2024-06-07 23:29:21.435173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:58.759 [2024-06-07 23:29:21.435180] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:58.759 [2024-06-07 23:29:21.435326] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:58.759 [2024-06-07 23:29:21.435450] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:58.759 [2024-06-07 23:29:21.435458] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:58.759 [2024-06-07 23:29:21.435465] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:59.021 [2024-06-07 23:29:21.437639] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:59.022 [2024-06-07 23:29:21.446868] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:59.022 [2024-06-07 23:29:21.447534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.022 [2024-06-07 23:29:21.447750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.022 [2024-06-07 23:29:21.447763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:59.022 [2024-06-07 23:29:21.447772] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:59.022 [2024-06-07 23:29:21.447951] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:59.022 [2024-06-07 23:29:21.448115] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:59.022 [2024-06-07 23:29:21.448123] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:59.022 [2024-06-07 23:29:21.448130] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:59.022 [2024-06-07 23:29:21.450573] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:59.022 [2024-06-07 23:29:21.459168] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:59.022 [2024-06-07 23:29:21.459546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.022 [2024-06-07 23:29:21.459779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.022 [2024-06-07 23:29:21.459789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:59.022 [2024-06-07 23:29:21.459796] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:59.022 [2024-06-07 23:29:21.459956] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:59.022 [2024-06-07 23:29:21.460099] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:59.022 [2024-06-07 23:29:21.460106] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:59.022 [2024-06-07 23:29:21.460113] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:59.022 [2024-06-07 23:29:21.462532] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:59.022 [2024-06-07 23:29:21.471624] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:59.022 [2024-06-07 23:29:21.472103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.022 [2024-06-07 23:29:21.472505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.022 [2024-06-07 23:29:21.472521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:59.022 [2024-06-07 23:29:21.472530] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:59.022 [2024-06-07 23:29:21.472710] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:59.022 [2024-06-07 23:29:21.472874] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:59.022 [2024-06-07 23:29:21.472882] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:59.022 [2024-06-07 23:29:21.472889] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:59.022 [2024-06-07 23:29:21.475185] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:59.022 [2024-06-07 23:29:21.484013] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:59.022 [2024-06-07 23:29:21.484523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.022 [2024-06-07 23:29:21.484905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.022 [2024-06-07 23:29:21.484922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:59.022 [2024-06-07 23:29:21.484931] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:59.022 [2024-06-07 23:29:21.485092] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:59.022 [2024-06-07 23:29:21.485220] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:59.022 [2024-06-07 23:29:21.485227] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:59.022 [2024-06-07 23:29:21.485235] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:59.022 [2024-06-07 23:29:21.487553] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:59.022 [2024-06-07 23:29:21.496358] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:59.022 [2024-06-07 23:29:21.496861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.022 [2024-06-07 23:29:21.497251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.022 [2024-06-07 23:29:21.497265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:59.022 [2024-06-07 23:29:21.497274] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:59.022 [2024-06-07 23:29:21.497454] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:59.022 [2024-06-07 23:29:21.497599] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:59.022 [2024-06-07 23:29:21.497608] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:59.022 [2024-06-07 23:29:21.497615] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:59.022 [2024-06-07 23:29:21.499836] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:59.022 [2024-06-07 23:29:21.508813] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:59.022 [2024-06-07 23:29:21.509328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.022 [2024-06-07 23:29:21.509700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.022 [2024-06-07 23:29:21.509710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:59.022 [2024-06-07 23:29:21.509718] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:59.022 [2024-06-07 23:29:21.509841] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:59.022 [2024-06-07 23:29:21.509948] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:59.022 [2024-06-07 23:29:21.509955] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:59.022 [2024-06-07 23:29:21.509962] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:59.022 [2024-06-07 23:29:21.512140] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:59.022 [2024-06-07 23:29:21.521337] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:59.022 [2024-06-07 23:29:21.521867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.022 [2024-06-07 23:29:21.522236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.022 [2024-06-07 23:29:21.522250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:59.022 [2024-06-07 23:29:21.522265] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:59.022 [2024-06-07 23:29:21.522425] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:59.022 [2024-06-07 23:29:21.522566] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:59.022 [2024-06-07 23:29:21.522575] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:59.022 [2024-06-07 23:29:21.522581] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:59.022 [2024-06-07 23:29:21.524777] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:59.022 [2024-06-07 23:29:21.533557] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:59.022 [2024-06-07 23:29:21.534147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.022 [2024-06-07 23:29:21.534534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.022 [2024-06-07 23:29:21.534549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:59.022 [2024-06-07 23:29:21.534558] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:59.022 [2024-06-07 23:29:21.534701] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:59.022 [2024-06-07 23:29:21.534883] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:59.022 [2024-06-07 23:29:21.534892] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:59.022 [2024-06-07 23:29:21.534899] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:59.022 [2024-06-07 23:29:21.537325] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:59.022 [2024-06-07 23:29:21.546270] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:59.022 [2024-06-07 23:29:21.546876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.022 [2024-06-07 23:29:21.547275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.022 [2024-06-07 23:29:21.547289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:59.022 [2024-06-07 23:29:21.547299] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:59.022 [2024-06-07 23:29:21.547460] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:59.022 [2024-06-07 23:29:21.547587] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:59.022 [2024-06-07 23:29:21.547595] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:59.022 [2024-06-07 23:29:21.547602] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:59.022 [2024-06-07 23:29:21.549772] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:59.022 [2024-06-07 23:29:21.558563] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:59.022 [2024-06-07 23:29:21.559075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.022 [2024-06-07 23:29:21.559431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.022 [2024-06-07 23:29:21.559442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:59.023 [2024-06-07 23:29:21.559449] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:59.023 [2024-06-07 23:29:21.559560] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:59.023 [2024-06-07 23:29:21.559683] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:59.023 [2024-06-07 23:29:21.559691] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:59.023 [2024-06-07 23:29:21.559698] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:59.023 [2024-06-07 23:29:21.561876] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:59.023 [2024-06-07 23:29:21.570991] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:59.023 [2024-06-07 23:29:21.571627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.023 [2024-06-07 23:29:21.571993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.023 [2024-06-07 23:29:21.572006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:59.023 [2024-06-07 23:29:21.572015] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:59.023 [2024-06-07 23:29:21.572158] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:59.023 [2024-06-07 23:29:21.572310] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:59.023 [2024-06-07 23:29:21.572320] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:59.023 [2024-06-07 23:29:21.572327] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:59.023 [2024-06-07 23:29:21.574383] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:59.023 [2024-06-07 23:29:21.583398] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:59.023 [2024-06-07 23:29:21.583907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.023 [2024-06-07 23:29:21.584268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.023 [2024-06-07 23:29:21.584279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:59.023 [2024-06-07 23:29:21.584287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:59.023 [2024-06-07 23:29:21.584447] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:59.023 [2024-06-07 23:29:21.584571] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:59.023 [2024-06-07 23:29:21.584579] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:59.023 [2024-06-07 23:29:21.584585] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:59.023 [2024-06-07 23:29:21.586765] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:59.023 [2024-06-07 23:29:21.595966] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:59.023 [2024-06-07 23:29:21.596603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.023 [2024-06-07 23:29:21.597045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.023 [2024-06-07 23:29:21.597058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:59.023 [2024-06-07 23:29:21.597068] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:59.023 [2024-06-07 23:29:21.597291] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:59.023 [2024-06-07 23:29:21.597441] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:59.023 [2024-06-07 23:29:21.597450] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:59.023 [2024-06-07 23:29:21.597457] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:59.023 [2024-06-07 23:29:21.599912] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:59.023 [2024-06-07 23:29:21.608528] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:59.023 [2024-06-07 23:29:21.609036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.023 [2024-06-07 23:29:21.609387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.023 [2024-06-07 23:29:21.609398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:59.023 [2024-06-07 23:29:21.609405] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:59.023 [2024-06-07 23:29:21.609584] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:59.023 [2024-06-07 23:29:21.609726] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:59.023 [2024-06-07 23:29:21.609734] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:59.023 [2024-06-07 23:29:21.609741] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:59.023 [2024-06-07 23:29:21.612067] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:59.023 [2024-06-07 23:29:21.620994] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:59.023 [2024-06-07 23:29:21.621402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.023 [2024-06-07 23:29:21.621723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.023 [2024-06-07 23:29:21.621732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:59.023 [2024-06-07 23:29:21.621739] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:59.023 [2024-06-07 23:29:21.621881] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:59.023 [2024-06-07 23:29:21.621969] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:59.023 [2024-06-07 23:29:21.621976] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:59.023 [2024-06-07 23:29:21.621984] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:59.023 [2024-06-07 23:29:21.624006] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:59.023 [2024-06-07 23:29:21.633783] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:59.023 [2024-06-07 23:29:21.634250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.023 [2024-06-07 23:29:21.634583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.023 [2024-06-07 23:29:21.634593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:59.023 [2024-06-07 23:29:21.634600] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:59.023 [2024-06-07 23:29:21.634760] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:59.023 [2024-06-07 23:29:21.634938] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:59.023 [2024-06-07 23:29:21.634951] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:59.023 [2024-06-07 23:29:21.634958] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:59.023 [2024-06-07 23:29:21.636953] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:59.023 [2024-06-07 23:29:21.646101] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:59.023 [2024-06-07 23:29:21.646485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.023 [2024-06-07 23:29:21.646837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.023 [2024-06-07 23:29:21.646848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:59.023 [2024-06-07 23:29:21.646855] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:59.023 [2024-06-07 23:29:21.646979] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:59.023 [2024-06-07 23:29:21.647121] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:59.023 [2024-06-07 23:29:21.647129] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:59.023 [2024-06-07 23:29:21.647137] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:59.023 [2024-06-07 23:29:21.649318] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:59.023 [2024-06-07 23:29:21.658340] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:59.023 [2024-06-07 23:29:21.658840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.023 [2024-06-07 23:29:21.659049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.023 [2024-06-07 23:29:21.659061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:59.023 [2024-06-07 23:29:21.659068] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:59.023 [2024-06-07 23:29:21.659209] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:59.023 [2024-06-07 23:29:21.659340] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:59.023 [2024-06-07 23:29:21.659348] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:59.023 [2024-06-07 23:29:21.659355] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:59.023 [2024-06-07 23:29:21.661496] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:59.023 [2024-06-07 23:29:21.670832] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:59.023 [2024-06-07 23:29:21.671331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.023 [2024-06-07 23:29:21.671566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.023 [2024-06-07 23:29:21.671575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:59.023 [2024-06-07 23:29:21.671582] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:59.023 [2024-06-07 23:29:21.671724] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:59.023 [2024-06-07 23:29:21.671902] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:59.023 [2024-06-07 23:29:21.671910] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:59.024 [2024-06-07 23:29:21.671920] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:59.024 [2024-06-07 23:29:21.674186] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:59.024 [2024-06-07 23:29:21.683449] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:59.024 [2024-06-07 23:29:21.683926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.024 [2024-06-07 23:29:21.684258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.024 [2024-06-07 23:29:21.684268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:59.024 [2024-06-07 23:29:21.684275] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:59.024 [2024-06-07 23:29:21.684381] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:59.024 [2024-06-07 23:29:21.684486] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:59.024 [2024-06-07 23:29:21.684494] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:59.024 [2024-06-07 23:29:21.684500] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:59.024 [2024-06-07 23:29:21.686622] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:59.024 [2024-06-07 23:29:21.695987] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:59.024 [2024-06-07 23:29:21.696504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.024 [2024-06-07 23:29:21.696891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.024 [2024-06-07 23:29:21.696904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:59.024 [2024-06-07 23:29:21.696913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:59.024 [2024-06-07 23:29:21.697037] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:59.024 [2024-06-07 23:29:21.697183] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:59.024 [2024-06-07 23:29:21.697191] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:59.024 [2024-06-07 23:29:21.697198] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:59.024 [2024-06-07 23:29:21.699479] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:59.285 23:29:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:59.285 23:29:21 -- common/autotest_common.sh@852 -- # return 0 00:32:59.285 23:29:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:59.285 23:29:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:59.285 23:29:21 -- common/autotest_common.sh@10 -- # set +x 00:32:59.285 [2024-06-07 23:29:21.708466] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:59.285 [2024-06-07 23:29:21.708970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.285 [2024-06-07 23:29:21.709197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.285 [2024-06-07 23:29:21.709206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:59.285 [2024-06-07 23:29:21.709214] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:59.285 [2024-06-07 23:29:21.709379] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:59.285 [2024-06-07 23:29:21.709485] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:59.285 [2024-06-07 23:29:21.709499] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:59.285 [2024-06-07 23:29:21.709506] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:59.285 [2024-06-07 23:29:21.711812] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:59.285 [2024-06-07 23:29:21.720934] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:59.285 [2024-06-07 23:29:21.721354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.285 [2024-06-07 23:29:21.721687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.285 [2024-06-07 23:29:21.721697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:59.285 [2024-06-07 23:29:21.721704] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:59.286 [2024-06-07 23:29:21.721828] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:59.286 [2024-06-07 23:29:21.721970] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:59.286 [2024-06-07 23:29:21.721978] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:59.286 [2024-06-07 23:29:21.721984] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:59.286 [2024-06-07 23:29:21.724143] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:59.286 [2024-06-07 23:29:21.733533] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:59.286 [2024-06-07 23:29:21.734124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.286 [2024-06-07 23:29:21.734334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.286 [2024-06-07 23:29:21.734349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:59.286 [2024-06-07 23:29:21.734358] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:59.286 [2024-06-07 23:29:21.734520] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:59.286 [2024-06-07 23:29:21.734665] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:59.286 [2024-06-07 23:29:21.734673] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:59.286 [2024-06-07 23:29:21.734680] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:59.286 [2024-06-07 23:29:21.736976] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:59.286 [2024-06-07 23:29:21.746058] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:59.286 [2024-06-07 23:29:21.746569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.286 [2024-06-07 23:29:21.746924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.286 [2024-06-07 23:29:21.746934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:59.286 [2024-06-07 23:29:21.746941] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:59.286 [2024-06-07 23:29:21.747120] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:59.286 [2024-06-07 23:29:21.747284] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:59.286 [2024-06-07 23:29:21.747292] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:59.286 [2024-06-07 23:29:21.747304] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:59.286 23:29:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:59.286 23:29:21 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:59.286 23:29:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:59.286 23:29:21 -- common/autotest_common.sh@10 -- # set +x 00:32:59.286 [2024-06-07 23:29:21.749502] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:59.286 [2024-06-07 23:29:21.753739] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:59.286 [2024-06-07 23:29:21.758588] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:59.286 23:29:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:59.286 [2024-06-07 23:29:21.759043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.286 23:29:21 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:59.286 [2024-06-07 23:29:21.759408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.286 [2024-06-07 23:29:21.759418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:59.286 [2024-06-07 23:29:21.759425] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:59.286 23:29:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:59.286 [2024-06-07 23:29:21.759586] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:59.286 [2024-06-07 23:29:21.759710] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:59.286 [2024-06-07 23:29:21.759718] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:59.286 [2024-06-07 23:29:21.759724] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
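At this point bdevperf.sh has finished waiting for the freshly started nvmf_tgt and begins configuring it: the TCP transport is created (the "TCP Transport Init" notice above) and a RAM-backed Malloc bdev is allocated for export. The rpc_cmd calls shown are thin wrappers; run outside the harness they correspond roughly to the following scripts/rpc.py invocations, a sketch with the flag values copied verbatim from the log rather than a definitive recipe:

# Create the NVMe/TCP transport, then a 64 MiB malloc bdev with
# 512-byte blocks named Malloc0 (same arguments as the rpc_cmd calls above).
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0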
00:32:59.286 23:29:21 -- common/autotest_common.sh@10 -- # set +x 00:32:59.286 [2024-06-07 23:29:21.761848] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:59.286 [2024-06-07 23:29:21.771157] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:59.286 [2024-06-07 23:29:21.771478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.286 [2024-06-07 23:29:21.771800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.286 [2024-06-07 23:29:21.771817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:59.286 [2024-06-07 23:29:21.771824] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:59.286 [2024-06-07 23:29:21.771949] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:59.286 [2024-06-07 23:29:21.772108] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:59.286 [2024-06-07 23:29:21.772115] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:59.286 [2024-06-07 23:29:21.772122] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:59.286 [2024-06-07 23:29:21.774483] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:59.286 [2024-06-07 23:29:21.783669] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:59.286 [2024-06-07 23:29:21.784181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.286 [2024-06-07 23:29:21.784576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.286 [2024-06-07 23:29:21.784587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:59.286 [2024-06-07 23:29:21.784598] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:59.286 [2024-06-07 23:29:21.784776] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:59.286 [2024-06-07 23:29:21.784936] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:59.286 [2024-06-07 23:29:21.784944] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:59.286 [2024-06-07 23:29:21.784950] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:59.286 [2024-06-07 23:29:21.787054] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:59.286 Malloc0 00:32:59.286 23:29:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:59.286 23:29:21 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:59.286 23:29:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:59.286 23:29:21 -- common/autotest_common.sh@10 -- # set +x 00:32:59.286 [2024-06-07 23:29:21.796069] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:59.286 [2024-06-07 23:29:21.796500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.286 [2024-06-07 23:29:21.796849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.286 [2024-06-07 23:29:21.796858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:59.286 [2024-06-07 23:29:21.796866] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:59.286 [2024-06-07 23:29:21.797026] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:59.286 [2024-06-07 23:29:21.797185] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:59.286 [2024-06-07 23:29:21.797193] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:59.286 [2024-06-07 23:29:21.797200] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:59.286 [2024-06-07 23:29:21.799291] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:59.286 23:29:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:59.286 23:29:21 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:59.286 23:29:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:59.286 23:29:21 -- common/autotest_common.sh@10 -- # set +x 00:32:59.286 [2024-06-07 23:29:21.808679] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:59.286 [2024-06-07 23:29:21.809287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.286 [2024-06-07 23:29:21.809574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.286 [2024-06-07 23:29:21.809588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:59.286 [2024-06-07 23:29:21.809597] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:59.286 [2024-06-07 23:29:21.809778] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:59.286 [2024-06-07 23:29:21.809941] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:59.286 [2024-06-07 23:29:21.809950] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:59.286 [2024-06-07 23:29:21.809958] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:32:59.286 [2024-06-07 23:29:21.812458] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:59.286 23:29:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:59.286 23:29:21 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:59.286 23:29:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:59.286 23:29:21 -- common/autotest_common.sh@10 -- # set +x 00:32:59.286 [2024-06-07 23:29:21.821248] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:59.286 [2024-06-07 23:29:21.821830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.286 [2024-06-07 23:29:21.822236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:59.286 [2024-06-07 23:29:21.822256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b374a0 with addr=10.0.0.2, port=4420 00:32:59.286 [2024-06-07 23:29:21.822266] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b374a0 is same with the state(5) to be set 00:32:59.287 [2024-06-07 23:29:21.822427] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b374a0 (9): Bad file descriptor 00:32:59.287 [2024-06-07 23:29:21.822554] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:59.287 [2024-06-07 23:29:21.822563] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:59.287 [2024-06-07 23:29:21.822570] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:59.287 [2024-06-07 23:29:21.824843] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:59.287 [2024-06-07 23:29:21.825136] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:59.287 23:29:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:59.287 23:29:21 -- host/bdevperf.sh@38 -- # wait 3049971 00:32:59.287 [2024-06-07 23:29:21.833816] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:59.287 [2024-06-07 23:29:21.867941] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
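With the transport and Malloc0 in place, the target side is completed: subsystem nqn.2016-06.io.spdk:cnode1 is created with serial SPDK00000000000001 and any-host access, Malloc0 is attached as its namespace, and a TCP listener is added on 10.0.0.2 port 4420 (the "NVMe/TCP Target Listening" notice above). Only then does the host-side reconnect loop stop failing, which is the "Resetting controller successful" message that closes this stretch of the log. A rough standalone equivalent of those rpc_cmd calls, assuming the default rpc.py socket:

# Subsystem with allow-any-host (-a) and serial SPDK00000000000001,
# Malloc0 exported as its namespace, TCP listener on 10.0.0.2:4420.
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevperf job summarized further below then drives this namespace with a verify workload at queue depth 128 and 4096-byte I/Os over a 15-second run.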
00:33:09.288 00:33:09.288 Latency(us) 00:33:09.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:09.288 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:09.288 Verification LBA range: start 0x0 length 0x4000 00:33:09.288 Nvme1n1 : 15.00 14364.09 56.11 14942.34 0.00 4352.96 730.45 13598.72 00:33:09.288 =================================================================================================================== 00:33:09.288 Total : 14364.09 56.11 14942.34 0.00 4352.96 730.45 13598.72 00:33:09.288 23:29:30 -- host/bdevperf.sh@39 -- # sync 00:33:09.288 23:29:30 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:09.288 23:29:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:09.288 23:29:30 -- common/autotest_common.sh@10 -- # set +x 00:33:09.288 23:29:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:09.288 23:29:30 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:33:09.288 23:29:30 -- host/bdevperf.sh@44 -- # nvmftestfini 00:33:09.288 23:29:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:09.288 23:29:30 -- nvmf/common.sh@116 -- # sync 00:33:09.288 23:29:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:09.288 23:29:30 -- nvmf/common.sh@119 -- # set +e 00:33:09.288 23:29:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:09.288 23:29:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:09.288 rmmod nvme_tcp 00:33:09.288 rmmod nvme_fabrics 00:33:09.288 rmmod nvme_keyring 00:33:09.288 23:29:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:09.288 23:29:30 -- nvmf/common.sh@123 -- # set -e 00:33:09.288 23:29:30 -- nvmf/common.sh@124 -- # return 0 00:33:09.288 23:29:30 -- nvmf/common.sh@477 -- # '[' -n 3051017 ']' 00:33:09.288 23:29:30 -- nvmf/common.sh@478 -- # killprocess 3051017 00:33:09.288 23:29:30 -- common/autotest_common.sh@926 -- # '[' -z 3051017 ']' 00:33:09.288 23:29:30 -- common/autotest_common.sh@930 -- # kill -0 3051017 00:33:09.288 23:29:30 -- common/autotest_common.sh@931 -- # uname 00:33:09.288 23:29:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:09.288 23:29:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3051017 00:33:09.288 23:29:30 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:33:09.288 23:29:30 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:33:09.288 23:29:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3051017' 00:33:09.288 killing process with pid 3051017 00:33:09.288 23:29:30 -- common/autotest_common.sh@945 -- # kill 3051017 00:33:09.288 23:29:30 -- common/autotest_common.sh@950 -- # wait 3051017 00:33:09.288 23:29:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:33:09.288 23:29:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:09.288 23:29:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:09.288 23:29:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:09.288 23:29:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:09.288 23:29:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:09.288 23:29:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:09.288 23:29:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:10.233 23:29:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:33:10.233 00:33:10.233 real 0m27.570s 00:33:10.233 user 1m2.471s 00:33:10.233 sys 0m6.978s 00:33:10.233 23:29:32 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:33:10.233 23:29:32 -- common/autotest_common.sh@10 -- # set +x 00:33:10.233 ************************************ 00:33:10.233 END TEST nvmf_bdevperf 00:33:10.233 ************************************ 00:33:10.233 23:29:32 -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:10.233 23:29:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:10.233 23:29:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:10.233 23:29:32 -- common/autotest_common.sh@10 -- # set +x 00:33:10.233 ************************************ 00:33:10.233 START TEST nvmf_target_disconnect 00:33:10.233 ************************************ 00:33:10.233 23:29:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:10.233 * Looking for test storage... 00:33:10.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:10.493 23:29:32 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:10.493 23:29:32 -- nvmf/common.sh@7 -- # uname -s 00:33:10.493 23:29:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:10.493 23:29:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:10.493 23:29:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:10.493 23:29:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:10.493 23:29:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:10.493 23:29:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:10.493 23:29:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:10.493 23:29:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:10.493 23:29:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:10.493 23:29:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:10.493 23:29:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:10.493 23:29:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:10.493 23:29:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:10.493 23:29:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:10.493 23:29:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:10.493 23:29:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:10.493 23:29:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:10.493 23:29:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:10.493 23:29:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:10.493 23:29:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.493 23:29:32 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.494 23:29:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.494 23:29:32 -- paths/export.sh@5 -- # export PATH 00:33:10.494 23:29:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.494 23:29:32 -- nvmf/common.sh@46 -- # : 0 00:33:10.494 23:29:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:10.494 23:29:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:10.494 23:29:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:10.494 23:29:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:10.494 23:29:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:10.494 23:29:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:10.494 23:29:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:10.494 23:29:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:10.494 23:29:32 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:10.494 23:29:32 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:33:10.494 23:29:32 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:33:10.494 23:29:32 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:33:10.494 23:29:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:10.494 23:29:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:10.494 23:29:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:10.494 23:29:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:10.494 23:29:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:10.494 23:29:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:10.494 23:29:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:10.494 23:29:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:10.494 23:29:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:33:10.494 23:29:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:33:10.494 23:29:32 -- nvmf/common.sh@284 -- # 
xtrace_disable 00:33:10.494 23:29:32 -- common/autotest_common.sh@10 -- # set +x 00:33:18.631 23:29:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:18.631 23:29:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:33:18.631 23:29:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:33:18.631 23:29:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:33:18.631 23:29:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:33:18.631 23:29:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:33:18.631 23:29:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:33:18.631 23:29:39 -- nvmf/common.sh@294 -- # net_devs=() 00:33:18.631 23:29:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:33:18.631 23:29:39 -- nvmf/common.sh@295 -- # e810=() 00:33:18.631 23:29:39 -- nvmf/common.sh@295 -- # local -ga e810 00:33:18.631 23:29:39 -- nvmf/common.sh@296 -- # x722=() 00:33:18.631 23:29:39 -- nvmf/common.sh@296 -- # local -ga x722 00:33:18.631 23:29:39 -- nvmf/common.sh@297 -- # mlx=() 00:33:18.631 23:29:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:33:18.631 23:29:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:18.631 23:29:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:18.631 23:29:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:18.631 23:29:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:18.631 23:29:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:18.631 23:29:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:18.631 23:29:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:18.631 23:29:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:18.631 23:29:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:18.631 23:29:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:18.631 23:29:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:18.631 23:29:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:33:18.631 23:29:39 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:33:18.631 23:29:39 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:33:18.631 23:29:39 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:33:18.631 23:29:39 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:33:18.631 23:29:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:33:18.631 23:29:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:18.631 23:29:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:18.631 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:18.631 23:29:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:18.631 23:29:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:18.631 23:29:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:18.631 23:29:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:18.631 23:29:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:18.631 23:29:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:18.631 23:29:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:18.631 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:18.631 23:29:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:18.631 23:29:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:18.631 23:29:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:18.631 23:29:39 -- nvmf/common.sh@350 -- 
# [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:18.631 23:29:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:18.631 23:29:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:33:18.631 23:29:39 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:33:18.631 23:29:39 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:33:18.632 23:29:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:18.632 23:29:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:18.632 23:29:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:18.632 23:29:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:18.632 23:29:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:18.632 Found net devices under 0000:31:00.0: cvl_0_0 00:33:18.632 23:29:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:18.632 23:29:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:18.632 23:29:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:18.632 23:29:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:18.632 23:29:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:18.632 23:29:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:18.632 Found net devices under 0000:31:00.1: cvl_0_1 00:33:18.632 23:29:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:18.632 23:29:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:33:18.632 23:29:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:33:18.632 23:29:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:33:18.632 23:29:39 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:33:18.632 23:29:39 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:33:18.632 23:29:39 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:18.632 23:29:39 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:18.632 23:29:39 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:18.632 23:29:39 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:33:18.632 23:29:39 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:18.632 23:29:39 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:18.632 23:29:39 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:33:18.632 23:29:39 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:18.632 23:29:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:18.632 23:29:39 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:33:18.632 23:29:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:33:18.632 23:29:39 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:33:18.632 23:29:39 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:18.632 23:29:39 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:18.632 23:29:39 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:18.632 23:29:40 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:33:18.632 23:29:40 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:18.632 23:29:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:18.632 23:29:40 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:18.632 23:29:40 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:33:18.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:18.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:33:18.632 00:33:18.632 --- 10.0.0.2 ping statistics --- 00:33:18.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:18.632 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:33:18.632 23:29:40 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:18.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:18.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:33:18.632 00:33:18.632 --- 10.0.0.1 ping statistics --- 00:33:18.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:18.632 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:33:18.632 23:29:40 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:18.632 23:29:40 -- nvmf/common.sh@410 -- # return 0 00:33:18.632 23:29:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:33:18.632 23:29:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:18.632 23:29:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:18.632 23:29:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:18.632 23:29:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:18.632 23:29:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:18.632 23:29:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:18.632 23:29:40 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:33:18.632 23:29:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:18.632 23:29:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:18.632 23:29:40 -- common/autotest_common.sh@10 -- # set +x 00:33:18.632 ************************************ 00:33:18.632 START TEST nvmf_target_disconnect_tc1 00:33:18.632 ************************************ 00:33:18.632 23:29:40 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:33:18.632 23:29:40 -- host/target_disconnect.sh@32 -- # set +e 00:33:18.632 23:29:40 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:18.632 EAL: No free 2048 kB hugepages reported on node 1 00:33:18.632 [2024-06-07 23:29:40.263957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.632 [2024-06-07 23:29:40.264302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.632 [2024-06-07 23:29:40.264318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x782d10 with addr=10.0.0.2, port=4420 00:33:18.632 [2024-06-07 23:29:40.264343] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:18.632 [2024-06-07 23:29:40.264353] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:18.632 [2024-06-07 23:29:40.264361] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:33:18.632 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:33:18.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:33:18.632 Initializing NVMe Controllers 00:33:18.632 23:29:40 -- host/target_disconnect.sh@33 -- # trap - ERR 00:33:18.632 23:29:40 -- host/target_disconnect.sh@33 -- # print_backtrace 00:33:18.632 23:29:40 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:33:18.632 23:29:40 -- common/autotest_common.sh@1132 -- # return 0 00:33:18.632 
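The connect() failures above are the expected outcome of nvmf_target_disconnect_tc1: the reconnect example is pointed at 10.0.0.2:4420 before any subsystem is listening there, and the test only passes if that probe fails. A minimal bash sketch of the pattern (the explicit exit-status check is an assumption about what the traced '[' ... ']' test amounts to, not the literal script):

  set +e
  ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  rc=$?
  set -e
  # With no listener on 10.0.0.2:4420, spdk_nvme_probe() should fail (connect() errno 111),
  # so a zero exit status here would mean the negative test itself failed.
  if [ "$rc" -eq 0 ]; then
      echo "nvmf_target_disconnect_tc1: reconnect unexpectedly succeeded" >&2
      exit 1
  fi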
23:29:40 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:33:18.632 23:29:40 -- host/target_disconnect.sh@41 -- # set -e 00:33:18.632 00:33:18.632 real 0m0.096s 00:33:18.632 user 0m0.031s 00:33:18.632 sys 0m0.064s 00:33:18.632 23:29:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:18.632 23:29:40 -- common/autotest_common.sh@10 -- # set +x 00:33:18.632 ************************************ 00:33:18.632 END TEST nvmf_target_disconnect_tc1 00:33:18.632 ************************************ 00:33:18.632 23:29:40 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:33:18.632 23:29:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:18.632 23:29:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:18.632 23:29:40 -- common/autotest_common.sh@10 -- # set +x 00:33:18.632 ************************************ 00:33:18.632 START TEST nvmf_target_disconnect_tc2 00:33:18.632 ************************************ 00:33:18.632 23:29:40 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:33:18.632 23:29:40 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:33:18.632 23:29:40 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:18.632 23:29:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:18.632 23:29:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:18.632 23:29:40 -- common/autotest_common.sh@10 -- # set +x 00:33:18.632 23:29:40 -- nvmf/common.sh@469 -- # nvmfpid=3057152 00:33:18.632 23:29:40 -- nvmf/common.sh@470 -- # waitforlisten 3057152 00:33:18.632 23:29:40 -- common/autotest_common.sh@819 -- # '[' -z 3057152 ']' 00:33:18.632 23:29:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:18.632 23:29:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:18.632 23:29:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:18.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:18.632 23:29:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:18.632 23:29:40 -- common/autotest_common.sh@10 -- # set +x 00:33:18.632 23:29:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:18.632 [2024-06-07 23:29:40.375419] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:33:18.632 [2024-06-07 23:29:40.375482] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:18.632 EAL: No free 2048 kB hugepages reported on node 1 00:33:18.632 [2024-06-07 23:29:40.463369] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:18.632 [2024-06-07 23:29:40.508537] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:18.632 [2024-06-07 23:29:40.508687] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:18.632 [2024-06-07 23:29:40.508696] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:18.632 [2024-06-07 23:29:40.508704] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
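For tc2, the traced nvmfappstart step launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then waits for the target to answer on its UNIX RPC socket before any configuration is pushed. A rough sketch of that startup sequence, assuming a simple polling loop (the real waitforlisten helper in common.sh may differ):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  # Poll the default RPC socket until the target responds, bailing out if it dies first.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      sleep 0.5
  done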
00:33:18.632 [2024-06-07 23:29:40.508867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:33:18.632 [2024-06-07 23:29:40.509026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:33:18.632 [2024-06-07 23:29:40.509186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:33:18.632 [2024-06-07 23:29:40.509187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:33:18.633 23:29:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:18.633 23:29:41 -- common/autotest_common.sh@852 -- # return 0 00:33:18.633 23:29:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:18.633 23:29:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:18.633 23:29:41 -- common/autotest_common.sh@10 -- # set +x 00:33:18.633 23:29:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:18.633 23:29:41 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:18.633 23:29:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.633 23:29:41 -- common/autotest_common.sh@10 -- # set +x 00:33:18.633 Malloc0 00:33:18.633 23:29:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.633 23:29:41 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:18.633 23:29:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.633 23:29:41 -- common/autotest_common.sh@10 -- # set +x 00:33:18.633 [2024-06-07 23:29:41.217804] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:18.633 23:29:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.633 23:29:41 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:18.633 23:29:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.633 23:29:41 -- common/autotest_common.sh@10 -- # set +x 00:33:18.633 23:29:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.633 23:29:41 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:18.633 23:29:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.633 23:29:41 -- common/autotest_common.sh@10 -- # set +x 00:33:18.633 23:29:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.633 23:29:41 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:18.633 23:29:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.633 23:29:41 -- common/autotest_common.sh@10 -- # set +x 00:33:18.633 [2024-06-07 23:29:41.258185] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:18.633 23:29:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.633 23:29:41 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:18.633 23:29:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.633 23:29:41 -- common/autotest_common.sh@10 -- # set +x 00:33:18.633 23:29:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.633 23:29:41 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:18.633 23:29:41 -- host/target_disconnect.sh@50 -- # reconnectpid=3057372 00:33:18.633 23:29:41 -- 
host/target_disconnect.sh@52 -- # sleep 2 00:33:18.893 EAL: No free 2048 kB hugepages reported on node 1 00:33:20.813 23:29:43 -- host/target_disconnect.sh@53 -- # kill -9 3057152 00:33:20.813 23:29:43 -- host/target_disconnect.sh@55 -- # sleep 2 00:33:20.813 Read completed with error (sct=0, sc=8) 00:33:20.813 starting I/O failed 00:33:20.813 Read completed with error (sct=0, sc=8) 00:33:20.813 starting I/O failed 00:33:20.813 Read completed with error (sct=0, sc=8) 00:33:20.813 starting I/O failed 00:33:20.813 Read completed with error (sct=0, sc=8) 00:33:20.813 starting I/O failed 00:33:20.813 Read completed with error (sct=0, sc=8) 00:33:20.813 starting I/O failed 00:33:20.813 Read completed with error (sct=0, sc=8) 00:33:20.813 starting I/O failed 00:33:20.813 Read completed with error (sct=0, sc=8) 00:33:20.813 starting I/O failed 00:33:20.813 Write completed with error (sct=0, sc=8) 00:33:20.813 starting I/O failed 00:33:20.813 Write completed with error (sct=0, sc=8) 00:33:20.813 starting I/O failed 00:33:20.813 Read completed with error (sct=0, sc=8) 00:33:20.813 starting I/O failed 00:33:20.813 Read completed with error (sct=0, sc=8) 00:33:20.813 starting I/O failed 00:33:20.813 Read completed with error (sct=0, sc=8) 00:33:20.813 starting I/O failed 00:33:20.813 Read completed with error (sct=0, sc=8) 00:33:20.813 starting I/O failed 00:33:20.813 Read completed with error (sct=0, sc=8) 00:33:20.813 starting I/O failed 00:33:20.813 Read completed with error (sct=0, sc=8) 00:33:20.813 starting I/O failed 00:33:20.813 Read completed with error (sct=0, sc=8) 00:33:20.813 starting I/O failed 00:33:20.813 Read completed with error (sct=0, sc=8) 00:33:20.813 starting I/O failed 00:33:20.813 Read completed with error (sct=0, sc=8) 00:33:20.813 starting I/O failed 00:33:20.813 Write completed with error (sct=0, sc=8) 00:33:20.813 starting I/O failed 00:33:20.813 Read completed with error (sct=0, sc=8) 00:33:20.813 starting I/O failed 00:33:20.813 Write completed with error (sct=0, sc=8) 00:33:20.813 starting I/O failed 00:33:20.813 Write completed with error (sct=0, sc=8) 00:33:20.813 starting I/O failed 00:33:20.813 Write completed with error (sct=0, sc=8) 00:33:20.813 starting I/O failed 00:33:20.813 Write completed with error (sct=0, sc=8) 00:33:20.813 starting I/O failed 00:33:20.813 Read completed with error (sct=0, sc=8) 00:33:20.813 starting I/O failed 00:33:20.813 Write completed with error (sct=0, sc=8) 00:33:20.813 starting I/O failed 00:33:20.813 Read completed with error (sct=0, sc=8) 00:33:20.813 starting I/O failed 00:33:20.813 Read completed with error (sct=0, sc=8) 00:33:20.813 starting I/O failed 00:33:20.813 Read completed with error (sct=0, sc=8) 00:33:20.813 starting I/O failed 00:33:20.813 Write completed with error (sct=0, sc=8) 00:33:20.813 starting I/O failed 00:33:20.813 Read completed with error (sct=0, sc=8) 00:33:20.813 starting I/O failed 00:33:20.813 Write completed with error (sct=0, sc=8) 00:33:20.813 starting I/O failed 00:33:20.813 [2024-06-07 23:29:43.290291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:20.813 [2024-06-07 23:29:43.290736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.813 [2024-06-07 23:29:43.291147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.813 [2024-06-07 23:29:43.291160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with 
addr=10.0.0.2, port=4420 00:33:20.813 qpair failed and we were unable to recover it. 00:33:20.813 [2024-06-07 23:29:43.291482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.813 [2024-06-07 23:29:43.291770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.813 [2024-06-07 23:29:43.291784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.814 qpair failed and we were unable to recover it. 00:33:20.814 [2024-06-07 23:29:43.292044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.292510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.292547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.814 qpair failed and we were unable to recover it. 00:33:20.814 [2024-06-07 23:29:43.292934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.293253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.293264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.814 qpair failed and we were unable to recover it. 00:33:20.814 [2024-06-07 23:29:43.293556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.293920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.293930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.814 qpair failed and we were unable to recover it. 00:33:20.814 [2024-06-07 23:29:43.294496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.294768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.294781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.814 qpair failed and we were unable to recover it. 00:33:20.814 [2024-06-07 23:29:43.295164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.295477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.295488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.814 qpair failed and we were unable to recover it. 00:33:20.814 [2024-06-07 23:29:43.295639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.295922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.295931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.814 qpair failed and we were unable to recover it. 
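The repeated "qpair failed and we were unable to recover it" messages above come from the reconnect example retrying I/O against a target that was just killed with kill -9. Before that point, tc2 had built the target configuration the reconnect job was using; expressed as direct rpc.py calls against the default socket, that setup is roughly the following (a sketch mirroring the rpc_cmd arguments traced earlier, not the script itself):

  rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
  # 64 MB malloc bdev with 512-byte blocks, exported over NVMe/TCP on 10.0.0.2:4420.
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420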
00:33:20.814 [2024-06-07 23:29:43.296044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.296257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.296274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.814 qpair failed and we were unable to recover it. 00:33:20.814 [2024-06-07 23:29:43.296664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.296921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.296931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.814 qpair failed and we were unable to recover it. 00:33:20.814 [2024-06-07 23:29:43.297261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.297584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.297595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.814 qpair failed and we were unable to recover it. 00:33:20.814 [2024-06-07 23:29:43.297977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.298183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.298193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.814 qpair failed and we were unable to recover it. 00:33:20.814 [2024-06-07 23:29:43.298473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.298821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.298831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.814 qpair failed and we were unable to recover it. 00:33:20.814 [2024-06-07 23:29:43.299143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.299465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.299474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.814 qpair failed and we were unable to recover it. 00:33:20.814 [2024-06-07 23:29:43.299825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.300147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.300157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.814 qpair failed and we were unable to recover it. 
00:33:20.814 [2024-06-07 23:29:43.300398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.300634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.300643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.814 qpair failed and we were unable to recover it. 00:33:20.814 [2024-06-07 23:29:43.300981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.301283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.301293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.814 qpair failed and we were unable to recover it. 00:33:20.814 [2024-06-07 23:29:43.301559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.301862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.301871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.814 qpair failed and we were unable to recover it. 00:33:20.814 [2024-06-07 23:29:43.302112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.302553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.302562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.814 qpair failed and we were unable to recover it. 00:33:20.814 [2024-06-07 23:29:43.302866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.303195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.303204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.814 qpair failed and we were unable to recover it. 00:33:20.814 [2024-06-07 23:29:43.303543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.303864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.303873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.814 qpair failed and we were unable to recover it. 00:33:20.814 [2024-06-07 23:29:43.304269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.304579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.304589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.814 qpair failed and we were unable to recover it. 
00:33:20.814 [2024-06-07 23:29:43.304873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.305228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.305236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.814 qpair failed and we were unable to recover it. 00:33:20.814 [2024-06-07 23:29:43.305598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.305930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.305938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.814 qpair failed and we were unable to recover it. 00:33:20.814 [2024-06-07 23:29:43.306272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.306610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.306619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.814 qpair failed and we were unable to recover it. 00:33:20.814 [2024-06-07 23:29:43.306951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.307309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.307319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.814 qpair failed and we were unable to recover it. 00:33:20.814 [2024-06-07 23:29:43.307702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.308024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.308032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.814 qpair failed and we were unable to recover it. 00:33:20.814 [2024-06-07 23:29:43.308328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.308568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.308577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.814 qpair failed and we were unable to recover it. 00:33:20.814 [2024-06-07 23:29:43.308826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.309112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.309122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.814 qpair failed and we were unable to recover it. 
00:33:20.814 [2024-06-07 23:29:43.309484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.309654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.814 [2024-06-07 23:29:43.309664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.814 qpair failed and we were unable to recover it. 00:33:20.814 [2024-06-07 23:29:43.310036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.310364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.310373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.815 qpair failed and we were unable to recover it. 00:33:20.815 [2024-06-07 23:29:43.310711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.310970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.310979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.815 qpair failed and we were unable to recover it. 00:33:20.815 [2024-06-07 23:29:43.311267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.311651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.311660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.815 qpair failed and we were unable to recover it. 00:33:20.815 [2024-06-07 23:29:43.311935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.312304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.312314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.815 qpair failed and we were unable to recover it. 00:33:20.815 [2024-06-07 23:29:43.312620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.313016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.313025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.815 qpair failed and we were unable to recover it. 00:33:20.815 [2024-06-07 23:29:43.313331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.313641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.313650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.815 qpair failed and we were unable to recover it. 
00:33:20.815 [2024-06-07 23:29:43.313935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.314199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.314208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.815 qpair failed and we were unable to recover it. 00:33:20.815 [2024-06-07 23:29:43.314508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.314802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.314810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.815 qpair failed and we were unable to recover it. 00:33:20.815 [2024-06-07 23:29:43.315150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.315437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.315447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.815 qpair failed and we were unable to recover it. 00:33:20.815 [2024-06-07 23:29:43.315774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.316037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.316051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.815 qpair failed and we were unable to recover it. 00:33:20.815 [2024-06-07 23:29:43.316389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.316608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.316620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.815 qpair failed and we were unable to recover it. 00:33:20.815 [2024-06-07 23:29:43.316949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.317262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.317271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.815 qpair failed and we were unable to recover it. 00:33:20.815 [2024-06-07 23:29:43.317622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.317856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.317865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.815 qpair failed and we were unable to recover it. 
00:33:20.815 [2024-06-07 23:29:43.318236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.318552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.318562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.815 qpair failed and we were unable to recover it. 00:33:20.815 [2024-06-07 23:29:43.318890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.319234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.319247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.815 qpair failed and we were unable to recover it. 00:33:20.815 [2024-06-07 23:29:43.319600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.319940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.319950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.815 qpair failed and we were unable to recover it. 00:33:20.815 [2024-06-07 23:29:43.320286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.321172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.321194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.815 qpair failed and we were unable to recover it. 00:33:20.815 [2024-06-07 23:29:43.321613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.321994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.322004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.815 qpair failed and we were unable to recover it. 00:33:20.815 [2024-06-07 23:29:43.322327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.322631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.322640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.815 qpair failed and we were unable to recover it. 00:33:20.815 [2024-06-07 23:29:43.322874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.323229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.323239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.815 qpair failed and we were unable to recover it. 
00:33:20.815 [2024-06-07 23:29:43.323569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.323900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.323909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.815 qpair failed and we were unable to recover it. 00:33:20.815 [2024-06-07 23:29:43.324252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.324558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.324567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.815 qpair failed and we were unable to recover it. 00:33:20.815 [2024-06-07 23:29:43.324981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.325293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.325303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.815 qpair failed and we were unable to recover it. 00:33:20.815 [2024-06-07 23:29:43.325673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.326003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.326012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.815 qpair failed and we were unable to recover it. 00:33:20.815 [2024-06-07 23:29:43.326406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.326806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.326820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.815 qpair failed and we were unable to recover it. 00:33:20.815 [2024-06-07 23:29:43.327202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.327520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.327531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.815 qpair failed and we were unable to recover it. 00:33:20.815 [2024-06-07 23:29:43.327870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.328230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.328239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.815 qpair failed and we were unable to recover it. 
00:33:20.815 [2024-06-07 23:29:43.328586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.328926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.328934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.815 qpair failed and we were unable to recover it. 00:33:20.815 [2024-06-07 23:29:43.329274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.329614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.815 [2024-06-07 23:29:43.329623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.815 qpair failed and we were unable to recover it. 00:33:20.816 [2024-06-07 23:29:43.329953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.330324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.330334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.816 qpair failed and we were unable to recover it. 00:33:20.816 [2024-06-07 23:29:43.330592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.330887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.330897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.816 qpair failed and we were unable to recover it. 00:33:20.816 [2024-06-07 23:29:43.331256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.331570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.331578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.816 qpair failed and we were unable to recover it. 00:33:20.816 [2024-06-07 23:29:43.331914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.332250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.332260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.816 qpair failed and we were unable to recover it. 00:33:20.816 [2024-06-07 23:29:43.332591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.332928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.332937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.816 qpair failed and we were unable to recover it. 
00:33:20.816 [2024-06-07 23:29:43.333249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.333628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.333637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.816 qpair failed and we were unable to recover it. 00:33:20.816 [2024-06-07 23:29:43.333976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.334326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.334336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.816 qpair failed and we were unable to recover it. 00:33:20.816 [2024-06-07 23:29:43.334651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.335010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.335019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.816 qpair failed and we were unable to recover it. 00:33:20.816 [2024-06-07 23:29:43.335341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.335689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.335698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.816 qpair failed and we were unable to recover it. 00:33:20.816 [2024-06-07 23:29:43.336030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.336351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.336360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.816 qpair failed and we were unable to recover it. 00:33:20.816 [2024-06-07 23:29:43.336606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.336905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.336913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.816 qpair failed and we were unable to recover it. 00:33:20.816 [2024-06-07 23:29:43.337299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.337613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.337622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.816 qpair failed and we were unable to recover it. 
00:33:20.816 [2024-06-07 23:29:43.337943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.338162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.338170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.816 qpair failed and we were unable to recover it. 00:33:20.816 [2024-06-07 23:29:43.338511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.338750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.338760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.816 qpair failed and we were unable to recover it. 00:33:20.816 [2024-06-07 23:29:43.339091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.339272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.339282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.816 qpair failed and we were unable to recover it. 00:33:20.816 [2024-06-07 23:29:43.339626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.339964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.339972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.816 qpair failed and we were unable to recover it. 00:33:20.816 [2024-06-07 23:29:43.340346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.340667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.340676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.816 qpair failed and we were unable to recover it. 00:33:20.816 [2024-06-07 23:29:43.341013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.341330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.341339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.816 qpair failed and we were unable to recover it. 00:33:20.816 [2024-06-07 23:29:43.341702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.342037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.342045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.816 qpair failed and we were unable to recover it. 
00:33:20.816 [2024-06-07 23:29:43.342372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.342726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.342734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.816 qpair failed and we were unable to recover it. 00:33:20.816 [2024-06-07 23:29:43.343063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.343418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.343428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.816 qpair failed and we were unable to recover it. 00:33:20.816 [2024-06-07 23:29:43.343682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.344019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.344028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.816 qpair failed and we were unable to recover it. 00:33:20.816 [2024-06-07 23:29:43.344368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.344672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.344681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.816 qpair failed and we were unable to recover it. 00:33:20.816 [2024-06-07 23:29:43.345033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.345370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.345380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.816 qpair failed and we were unable to recover it. 00:33:20.816 [2024-06-07 23:29:43.345727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.346021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.346030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.816 qpair failed and we were unable to recover it. 00:33:20.816 [2024-06-07 23:29:43.346273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.346483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.346494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.816 qpair failed and we were unable to recover it. 
00:33:20.816 [2024-06-07 23:29:43.346845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.347187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.347196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.816 qpair failed and we were unable to recover it. 00:33:20.816 [2024-06-07 23:29:43.347422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.347777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.347785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.816 qpair failed and we were unable to recover it. 00:33:20.816 [2024-06-07 23:29:43.348140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.816 [2024-06-07 23:29:43.348459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.348468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.817 qpair failed and we were unable to recover it. 00:33:20.817 [2024-06-07 23:29:43.348804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.349141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.349149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.817 qpair failed and we were unable to recover it. 00:33:20.817 [2024-06-07 23:29:43.349495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.349828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.349836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.817 qpair failed and we were unable to recover it. 00:33:20.817 [2024-06-07 23:29:43.350163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.350479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.350488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.817 qpair failed and we were unable to recover it. 00:33:20.817 [2024-06-07 23:29:43.350825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.351033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.351043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.817 qpair failed and we were unable to recover it. 
00:33:20.817 [2024-06-07 23:29:43.351388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.351644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.351653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.817 qpair failed and we were unable to recover it. 00:33:20.817 [2024-06-07 23:29:43.351997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.352299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.352308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.817 qpair failed and we were unable to recover it. 00:33:20.817 [2024-06-07 23:29:43.352654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.353045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.353055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.817 qpair failed and we were unable to recover it. 00:33:20.817 [2024-06-07 23:29:43.353397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.353732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.353740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.817 qpair failed and we were unable to recover it. 00:33:20.817 [2024-06-07 23:29:43.354120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.354425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.354434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.817 qpair failed and we were unable to recover it. 00:33:20.817 [2024-06-07 23:29:43.354658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.354998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.355007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.817 qpair failed and we were unable to recover it. 00:33:20.817 [2024-06-07 23:29:43.355340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.355678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.355686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.817 qpair failed and we were unable to recover it. 
00:33:20.817 [2024-06-07 23:29:43.356018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.356351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.356359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.817 qpair failed and we were unable to recover it. 00:33:20.817 [2024-06-07 23:29:43.356594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.356921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.356929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.817 qpair failed and we were unable to recover it. 00:33:20.817 [2024-06-07 23:29:43.357259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.357568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.357577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.817 qpair failed and we were unable to recover it. 00:33:20.817 [2024-06-07 23:29:43.357984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.358320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.358329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.817 qpair failed and we were unable to recover it. 00:33:20.817 [2024-06-07 23:29:43.358681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.359012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.359022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.817 qpair failed and we were unable to recover it. 00:33:20.817 [2024-06-07 23:29:43.359363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.359719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.359727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.817 qpair failed and we were unable to recover it. 00:33:20.817 [2024-06-07 23:29:43.359901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.360212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.360221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.817 qpair failed and we were unable to recover it. 
00:33:20.817 [2024-06-07 23:29:43.360569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.360883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.360891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.817 qpair failed and we were unable to recover it. 00:33:20.817 [2024-06-07 23:29:43.361222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.361435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.361445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.817 qpair failed and we were unable to recover it. 00:33:20.817 [2024-06-07 23:29:43.361778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.362121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.362129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.817 qpair failed and we were unable to recover it. 00:33:20.817 [2024-06-07 23:29:43.362454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.362787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.362797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.817 qpair failed and we were unable to recover it. 00:33:20.817 [2024-06-07 23:29:43.363103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.363447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.363456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.817 qpair failed and we were unable to recover it. 00:33:20.817 [2024-06-07 23:29:43.363818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.364205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.364214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.817 qpair failed and we were unable to recover it. 00:33:20.817 [2024-06-07 23:29:43.364569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.364874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.364882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.817 qpair failed and we were unable to recover it. 
00:33:20.817 [2024-06-07 23:29:43.365213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.365526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.365535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.817 qpair failed and we were unable to recover it. 00:33:20.817 [2024-06-07 23:29:43.365914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.366257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.366267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.817 qpair failed and we were unable to recover it. 00:33:20.817 [2024-06-07 23:29:43.366632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.366967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.366976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.817 qpair failed and we were unable to recover it. 00:33:20.817 [2024-06-07 23:29:43.367323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.817 [2024-06-07 23:29:43.367631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.367640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.818 qpair failed and we were unable to recover it. 00:33:20.818 [2024-06-07 23:29:43.367965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.368269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.368278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.818 qpair failed and we were unable to recover it. 00:33:20.818 [2024-06-07 23:29:43.368632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.368940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.368948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.818 qpair failed and we were unable to recover it. 00:33:20.818 [2024-06-07 23:29:43.369279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.369609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.369617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.818 qpair failed and we were unable to recover it. 
00:33:20.818 [2024-06-07 23:29:43.369944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.370275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.370285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.818 qpair failed and we were unable to recover it. 00:33:20.818 [2024-06-07 23:29:43.370611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.370944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.370952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.818 qpair failed and we were unable to recover it. 00:33:20.818 [2024-06-07 23:29:43.371284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.371638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.371647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.818 qpair failed and we were unable to recover it. 00:33:20.818 [2024-06-07 23:29:43.371878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.372211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.372220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.818 qpair failed and we were unable to recover it. 00:33:20.818 [2024-06-07 23:29:43.372554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.372916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.372926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.818 qpair failed and we were unable to recover it. 00:33:20.818 [2024-06-07 23:29:43.373265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.373606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.373615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.818 qpair failed and we were unable to recover it. 00:33:20.818 [2024-06-07 23:29:43.373939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.374277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.374286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.818 qpair failed and we were unable to recover it. 
00:33:20.818 [2024-06-07 23:29:43.374619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.374867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.374875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.818 qpair failed and we were unable to recover it. 00:33:20.818 [2024-06-07 23:29:43.375218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.375571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.375580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.818 qpair failed and we were unable to recover it. 00:33:20.818 [2024-06-07 23:29:43.375932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.376203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.376212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.818 qpair failed and we were unable to recover it. 00:33:20.818 [2024-06-07 23:29:43.376574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.376902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.376911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.818 qpair failed and we were unable to recover it. 00:33:20.818 [2024-06-07 23:29:43.377260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.377594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.377603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.818 qpair failed and we were unable to recover it. 00:33:20.818 [2024-06-07 23:29:43.377934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.378268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.378278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.818 qpair failed and we were unable to recover it. 00:33:20.818 [2024-06-07 23:29:43.378616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.378945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.378954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.818 qpair failed and we were unable to recover it. 
00:33:20.818 [2024-06-07 23:29:43.379305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.379652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.379661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.818 qpair failed and we were unable to recover it. 00:33:20.818 [2024-06-07 23:29:43.380007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.380333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.380342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.818 qpair failed and we were unable to recover it. 00:33:20.818 [2024-06-07 23:29:43.380594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.380926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.380935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.818 qpair failed and we were unable to recover it. 00:33:20.818 [2024-06-07 23:29:43.381264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.381590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.381599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.818 qpair failed and we were unable to recover it. 00:33:20.818 [2024-06-07 23:29:43.381959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.818 [2024-06-07 23:29:43.382296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.382305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.819 qpair failed and we were unable to recover it. 00:33:20.819 [2024-06-07 23:29:43.382714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.383032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.383041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.819 qpair failed and we were unable to recover it. 00:33:20.819 [2024-06-07 23:29:43.383382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.383722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.383731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.819 qpair failed and we were unable to recover it. 
00:33:20.819 [2024-06-07 23:29:43.383985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.384308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.384317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.819 qpair failed and we were unable to recover it. 00:33:20.819 [2024-06-07 23:29:43.384645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.384977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.384988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.819 qpair failed and we were unable to recover it. 00:33:20.819 [2024-06-07 23:29:43.385314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.385685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.385694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.819 qpair failed and we were unable to recover it. 00:33:20.819 [2024-06-07 23:29:43.386027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.386361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.386370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.819 qpair failed and we were unable to recover it. 00:33:20.819 [2024-06-07 23:29:43.386692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.386999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.387007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.819 qpair failed and we were unable to recover it. 00:33:20.819 [2024-06-07 23:29:43.387340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.387643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.387652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.819 qpair failed and we were unable to recover it. 00:33:20.819 [2024-06-07 23:29:43.387983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.388320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.388330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.819 qpair failed and we were unable to recover it. 
00:33:20.819 [2024-06-07 23:29:43.388677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.388900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.388910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.819 qpair failed and we were unable to recover it. 00:33:20.819 [2024-06-07 23:29:43.389264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.389620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.389629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.819 qpair failed and we were unable to recover it. 00:33:20.819 [2024-06-07 23:29:43.389959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.390296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.390306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.819 qpair failed and we were unable to recover it. 00:33:20.819 [2024-06-07 23:29:43.390668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.391002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.391011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.819 qpair failed and we were unable to recover it. 00:33:20.819 [2024-06-07 23:29:43.391340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.391713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.391724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.819 qpair failed and we were unable to recover it. 00:33:20.819 [2024-06-07 23:29:43.392049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.392382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.392391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.819 qpair failed and we were unable to recover it. 00:33:20.819 [2024-06-07 23:29:43.392785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.393138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.393147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.819 qpair failed and we were unable to recover it. 
00:33:20.819 [2024-06-07 23:29:43.393542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.393926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.393934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.819 qpair failed and we were unable to recover it. 00:33:20.819 [2024-06-07 23:29:43.394300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.394633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.394641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.819 qpair failed and we were unable to recover it. 00:33:20.819 [2024-06-07 23:29:43.394879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.395219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.395228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.819 qpair failed and we were unable to recover it. 00:33:20.819 [2024-06-07 23:29:43.395629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.395937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.395947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.819 qpair failed and we were unable to recover it. 00:33:20.819 [2024-06-07 23:29:43.396320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.396655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.396664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.819 qpair failed and we were unable to recover it. 00:33:20.819 [2024-06-07 23:29:43.397013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.397349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.397358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.819 qpair failed and we were unable to recover it. 00:33:20.819 [2024-06-07 23:29:43.397559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.397870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.397879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.819 qpair failed and we were unable to recover it. 
00:33:20.819 [2024-06-07 23:29:43.398205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.398569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.398579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.819 qpair failed and we were unable to recover it. 00:33:20.819 [2024-06-07 23:29:43.398907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.399195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.399204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.819 qpair failed and we were unable to recover it. 00:33:20.819 [2024-06-07 23:29:43.399610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.399946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.399955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.819 qpair failed and we were unable to recover it. 00:33:20.819 [2024-06-07 23:29:43.400322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.400663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.400672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.819 qpair failed and we were unable to recover it. 00:33:20.819 [2024-06-07 23:29:43.401003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.401344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.401353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.819 qpair failed and we were unable to recover it. 00:33:20.819 [2024-06-07 23:29:43.401707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.402040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.819 [2024-06-07 23:29:43.402049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.820 qpair failed and we were unable to recover it. 00:33:20.820 [2024-06-07 23:29:43.402380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.402728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.402737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.820 qpair failed and we were unable to recover it. 
00:33:20.820 [2024-06-07 23:29:43.403080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.403403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.403412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.820 qpair failed and we were unable to recover it. 00:33:20.820 [2024-06-07 23:29:43.403760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.404093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.404101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.820 qpair failed and we were unable to recover it. 00:33:20.820 [2024-06-07 23:29:43.404426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.404646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.404656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.820 qpair failed and we were unable to recover it. 00:33:20.820 [2024-06-07 23:29:43.404989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.405353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.405363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.820 qpair failed and we were unable to recover it. 00:33:20.820 [2024-06-07 23:29:43.405730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.406029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.406038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.820 qpair failed and we were unable to recover it. 00:33:20.820 [2024-06-07 23:29:43.406405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.406735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.406743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.820 qpair failed and we were unable to recover it. 00:33:20.820 [2024-06-07 23:29:43.407091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.407429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.407438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.820 qpair failed and we were unable to recover it. 
00:33:20.820 [2024-06-07 23:29:43.407776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.408101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.408111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.820 qpair failed and we were unable to recover it. 00:33:20.820 [2024-06-07 23:29:43.408475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.408684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.408694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.820 qpair failed and we were unable to recover it. 00:33:20.820 [2024-06-07 23:29:43.409043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.409379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.409388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.820 qpair failed and we were unable to recover it. 00:33:20.820 [2024-06-07 23:29:43.409647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.410005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.410014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.820 qpair failed and we were unable to recover it. 00:33:20.820 [2024-06-07 23:29:43.410355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.410608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.410617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.820 qpair failed and we were unable to recover it. 00:33:20.820 [2024-06-07 23:29:43.410947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.411236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.411256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.820 qpair failed and we were unable to recover it. 00:33:20.820 [2024-06-07 23:29:43.411628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.411958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.411966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.820 qpair failed and we were unable to recover it. 
00:33:20.820 [2024-06-07 23:29:43.412294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.412691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.412700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.820 qpair failed and we were unable to recover it. 00:33:20.820 [2024-06-07 23:29:43.413086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.413423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.413432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.820 qpair failed and we were unable to recover it. 00:33:20.820 [2024-06-07 23:29:43.413766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.414099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.414108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.820 qpair failed and we were unable to recover it. 00:33:20.820 [2024-06-07 23:29:43.414444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.414780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.414788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.820 qpair failed and we were unable to recover it. 00:33:20.820 [2024-06-07 23:29:43.415128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.415462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.415471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.820 qpair failed and we were unable to recover it. 00:33:20.820 [2024-06-07 23:29:43.415840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.416175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.416183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.820 qpair failed and we were unable to recover it. 00:33:20.820 [2024-06-07 23:29:43.416508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.416832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.416840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.820 qpair failed and we were unable to recover it. 
00:33:20.820 [2024-06-07 23:29:43.417166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.417430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.417439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.820 qpair failed and we were unable to recover it. 00:33:20.820 [2024-06-07 23:29:43.417757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.418093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.418101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.820 qpair failed and we were unable to recover it. 00:33:20.820 [2024-06-07 23:29:43.418444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.418776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.418785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.820 qpair failed and we were unable to recover it. 00:33:20.820 [2024-06-07 23:29:43.419113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.419441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.419451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.820 qpair failed and we were unable to recover it. 00:33:20.820 [2024-06-07 23:29:43.419797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.420131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.420140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.820 qpair failed and we were unable to recover it. 00:33:20.820 [2024-06-07 23:29:43.420505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.420862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.420871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.820 qpair failed and we were unable to recover it. 00:33:20.820 [2024-06-07 23:29:43.421198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.820 [2024-06-07 23:29:43.421540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.421549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.821 qpair failed and we were unable to recover it. 
00:33:20.821 [2024-06-07 23:29:43.421881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.422218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.422227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.821 qpair failed and we were unable to recover it. 00:33:20.821 [2024-06-07 23:29:43.422588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.422879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.422889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.821 qpair failed and we were unable to recover it. 00:33:20.821 [2024-06-07 23:29:43.423180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.423442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.423452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.821 qpair failed and we were unable to recover it. 00:33:20.821 [2024-06-07 23:29:43.423793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.424132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.424141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.821 qpair failed and we were unable to recover it. 00:33:20.821 [2024-06-07 23:29:43.424486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.424827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.424837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.821 qpair failed and we were unable to recover it. 00:33:20.821 [2024-06-07 23:29:43.425178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.425475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.425484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.821 qpair failed and we were unable to recover it. 00:33:20.821 [2024-06-07 23:29:43.425825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.426158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.426169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.821 qpair failed and we were unable to recover it. 
00:33:20.821 [2024-06-07 23:29:43.426420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.426756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.426766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.821 qpair failed and we were unable to recover it. 00:33:20.821 [2024-06-07 23:29:43.427110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.427445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.427454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.821 qpair failed and we were unable to recover it. 00:33:20.821 [2024-06-07 23:29:43.427787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.428154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.428163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.821 qpair failed and we were unable to recover it. 00:33:20.821 [2024-06-07 23:29:43.428438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.428801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.428810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.821 qpair failed and we were unable to recover it. 00:33:20.821 [2024-06-07 23:29:43.429142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.429469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.429478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.821 qpair failed and we were unable to recover it. 00:33:20.821 [2024-06-07 23:29:43.429673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.430033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.430043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.821 qpair failed and we were unable to recover it. 00:33:20.821 [2024-06-07 23:29:43.430384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.430644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.430653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.821 qpair failed and we were unable to recover it. 
00:33:20.821 [2024-06-07 23:29:43.431001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.431364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.431374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.821 qpair failed and we were unable to recover it. 00:33:20.821 [2024-06-07 23:29:43.431726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.432053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.432062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.821 qpair failed and we were unable to recover it. 00:33:20.821 [2024-06-07 23:29:43.432432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.432811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.432820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.821 qpair failed and we were unable to recover it. 00:33:20.821 [2024-06-07 23:29:43.433175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.433486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.433496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.821 qpair failed and we were unable to recover it. 00:33:20.821 [2024-06-07 23:29:43.433827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.434192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.434201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.821 qpair failed and we were unable to recover it. 00:33:20.821 [2024-06-07 23:29:43.434442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.434821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.434830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.821 qpair failed and we were unable to recover it. 00:33:20.821 [2024-06-07 23:29:43.435239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.435600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.435609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.821 qpair failed and we were unable to recover it. 
00:33:20.821 [2024-06-07 23:29:43.435938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.436219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.436228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.821 qpair failed and we were unable to recover it. 00:33:20.821 [2024-06-07 23:29:43.436563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.436899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.436908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.821 qpair failed and we were unable to recover it. 00:33:20.821 [2024-06-07 23:29:43.437108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.437421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.437431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.821 qpair failed and we were unable to recover it. 00:33:20.821 [2024-06-07 23:29:43.437767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.438102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.438111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.821 qpair failed and we were unable to recover it. 00:33:20.821 [2024-06-07 23:29:43.438437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.438746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.438755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.821 qpair failed and we were unable to recover it. 00:33:20.821 [2024-06-07 23:29:43.439087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.439450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.439459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.821 qpair failed and we were unable to recover it. 00:33:20.821 [2024-06-07 23:29:43.439802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.440137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.440146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.821 qpair failed and we were unable to recover it. 
00:33:20.821 [2024-06-07 23:29:43.440492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.821 [2024-06-07 23:29:43.440825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.440834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.822 qpair failed and we were unable to recover it. 00:33:20.822 [2024-06-07 23:29:43.441077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.441414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.441423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.822 qpair failed and we were unable to recover it. 00:33:20.822 [2024-06-07 23:29:43.441710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.442030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.442038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.822 qpair failed and we were unable to recover it. 00:33:20.822 [2024-06-07 23:29:43.442366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.442713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.442722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.822 qpair failed and we were unable to recover it. 00:33:20.822 [2024-06-07 23:29:43.443057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.443393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.443403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.822 qpair failed and we were unable to recover it. 00:33:20.822 [2024-06-07 23:29:43.443768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.444131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.444140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.822 qpair failed and we were unable to recover it. 00:33:20.822 [2024-06-07 23:29:43.444474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.444748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.444756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.822 qpair failed and we were unable to recover it. 
00:33:20.822 [2024-06-07 23:29:43.445107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.445443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.445453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.822 qpair failed and we were unable to recover it. 00:33:20.822 [2024-06-07 23:29:43.445872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.446204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.446213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.822 qpair failed and we were unable to recover it. 00:33:20.822 [2024-06-07 23:29:43.446623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.446972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.446981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.822 qpair failed and we were unable to recover it. 00:33:20.822 [2024-06-07 23:29:43.447348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.447646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.447655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.822 qpair failed and we were unable to recover it. 00:33:20.822 [2024-06-07 23:29:43.447981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.448316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.448325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.822 qpair failed and we were unable to recover it. 00:33:20.822 [2024-06-07 23:29:43.448653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.448942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.448951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.822 qpair failed and we were unable to recover it. 00:33:20.822 [2024-06-07 23:29:43.449293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.449654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.449663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.822 qpair failed and we were unable to recover it. 
00:33:20.822 [2024-06-07 23:29:43.450030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.450393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.450402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.822 qpair failed and we were unable to recover it. 00:33:20.822 [2024-06-07 23:29:43.450732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.451065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.451074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.822 qpair failed and we were unable to recover it. 00:33:20.822 [2024-06-07 23:29:43.451402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.451744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.451753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.822 qpair failed and we were unable to recover it. 00:33:20.822 [2024-06-07 23:29:43.452066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.452398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.452407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.822 qpair failed and we were unable to recover it. 00:33:20.822 [2024-06-07 23:29:43.452689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.453022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.453031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.822 qpair failed and we were unable to recover it. 00:33:20.822 [2024-06-07 23:29:43.453359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.453696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.453705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.822 qpair failed and we were unable to recover it. 00:33:20.822 [2024-06-07 23:29:43.454032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.454194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.454203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.822 qpair failed and we were unable to recover it. 
00:33:20.822 [2024-06-07 23:29:43.454440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.454736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.454745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.822 qpair failed and we were unable to recover it. 00:33:20.822 [2024-06-07 23:29:43.455098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.455443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.455452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.822 qpair failed and we were unable to recover it. 00:33:20.822 [2024-06-07 23:29:43.455817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.456129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.456139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.822 qpair failed and we were unable to recover it. 00:33:20.822 [2024-06-07 23:29:43.456471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.456820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.822 [2024-06-07 23:29:43.456829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.822 qpair failed and we were unable to recover it. 00:33:20.822 [2024-06-07 23:29:43.457171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.457471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.457481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.823 qpair failed and we were unable to recover it. 00:33:20.823 [2024-06-07 23:29:43.457819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.458079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.458088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.823 qpair failed and we were unable to recover it. 00:33:20.823 [2024-06-07 23:29:43.458478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.458818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.458827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.823 qpair failed and we were unable to recover it. 
00:33:20.823 [2024-06-07 23:29:43.459172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.459482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.459491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.823 qpair failed and we were unable to recover it. 00:33:20.823 [2024-06-07 23:29:43.459821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.460154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.460165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.823 qpair failed and we were unable to recover it. 00:33:20.823 [2024-06-07 23:29:43.460561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.460900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.460908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.823 qpair failed and we were unable to recover it. 00:33:20.823 [2024-06-07 23:29:43.461279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.461598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.461607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.823 qpair failed and we were unable to recover it. 00:33:20.823 [2024-06-07 23:29:43.461937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.462246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.462255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.823 qpair failed and we were unable to recover it. 00:33:20.823 [2024-06-07 23:29:43.462579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.462919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.462928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.823 qpair failed and we were unable to recover it. 00:33:20.823 [2024-06-07 23:29:43.463291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.463603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.463612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.823 qpair failed and we were unable to recover it. 
00:33:20.823 [2024-06-07 23:29:43.463984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.464342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.464352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.823 qpair failed and we were unable to recover it. 00:33:20.823 [2024-06-07 23:29:43.464697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.464968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.464977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.823 qpair failed and we were unable to recover it. 00:33:20.823 [2024-06-07 23:29:43.465319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.465671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.465680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.823 qpair failed and we were unable to recover it. 00:33:20.823 [2024-06-07 23:29:43.465994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.466285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.466294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.823 qpair failed and we were unable to recover it. 00:33:20.823 [2024-06-07 23:29:43.466687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.467029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.467038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.823 qpair failed and we were unable to recover it. 00:33:20.823 [2024-06-07 23:29:43.467446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.467812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.467821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.823 qpair failed and we were unable to recover it. 00:33:20.823 [2024-06-07 23:29:43.468071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.468405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.468414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.823 qpair failed and we were unable to recover it. 
00:33:20.823 [2024-06-07 23:29:43.468629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.468964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.468973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.823 qpair failed and we were unable to recover it. 00:33:20.823 [2024-06-07 23:29:43.469307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.469658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.469666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.823 qpair failed and we were unable to recover it. 00:33:20.823 [2024-06-07 23:29:43.470000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.470341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.470351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.823 qpair failed and we were unable to recover it. 00:33:20.823 [2024-06-07 23:29:43.470714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.471013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.471022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.823 qpair failed and we were unable to recover it. 00:33:20.823 [2024-06-07 23:29:43.471389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.471748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.471757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.823 qpair failed and we were unable to recover it. 00:33:20.823 [2024-06-07 23:29:43.472104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.472445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.472454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.823 qpair failed and we were unable to recover it. 00:33:20.823 [2024-06-07 23:29:43.472775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.473137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.473145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.823 qpair failed and we were unable to recover it. 
00:33:20.823 [2024-06-07 23:29:43.473504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.473842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.473851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.823 qpair failed and we were unable to recover it. 00:33:20.823 [2024-06-07 23:29:43.474172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.474505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.474516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.823 qpair failed and we were unable to recover it. 00:33:20.823 [2024-06-07 23:29:43.474896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.475229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.475238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.823 qpair failed and we were unable to recover it. 00:33:20.823 [2024-06-07 23:29:43.475626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.475977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.475985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.823 qpair failed and we were unable to recover it. 00:33:20.823 [2024-06-07 23:29:43.476320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.476662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.823 [2024-06-07 23:29:43.476671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.823 qpair failed and we were unable to recover it. 00:33:20.824 [2024-06-07 23:29:43.476915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.824 [2024-06-07 23:29:43.477326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.824 [2024-06-07 23:29:43.477336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.824 qpair failed and we were unable to recover it. 00:33:20.824 [2024-06-07 23:29:43.477687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.824 [2024-06-07 23:29:43.478059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.824 [2024-06-07 23:29:43.478068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.824 qpair failed and we were unable to recover it. 
00:33:20.824 [2024-06-07 23:29:43.478417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.824 [2024-06-07 23:29:43.478769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.824 [2024-06-07 23:29:43.478777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.824 qpair failed and we were unable to recover it. 00:33:20.824 [2024-06-07 23:29:43.479101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.824 [2024-06-07 23:29:43.479445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.824 [2024-06-07 23:29:43.479455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.824 qpair failed and we were unable to recover it. 00:33:20.824 [2024-06-07 23:29:43.479708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.824 [2024-06-07 23:29:43.480046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.824 [2024-06-07 23:29:43.480055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.824 qpair failed and we were unable to recover it. 00:33:20.824 [2024-06-07 23:29:43.480387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.824 [2024-06-07 23:29:43.480779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.824 [2024-06-07 23:29:43.480787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.824 qpair failed and we were unable to recover it. 00:33:20.824 [2024-06-07 23:29:43.481029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.824 [2024-06-07 23:29:43.481346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.824 [2024-06-07 23:29:43.481355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.824 qpair failed and we were unable to recover it. 00:33:20.824 [2024-06-07 23:29:43.481710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.824 [2024-06-07 23:29:43.482046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.824 [2024-06-07 23:29:43.482055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.824 qpair failed and we were unable to recover it. 00:33:20.824 [2024-06-07 23:29:43.482374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.824 [2024-06-07 23:29:43.482724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.824 [2024-06-07 23:29:43.482733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.824 qpair failed and we were unable to recover it. 
00:33:20.824 [2024-06-07 23:29:43.483101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.824 [2024-06-07 23:29:43.483312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.824 [2024-06-07 23:29:43.483322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.824 qpair failed and we were unable to recover it. 00:33:20.824 [2024-06-07 23:29:43.483687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.824 [2024-06-07 23:29:43.484062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.824 [2024-06-07 23:29:43.484070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.824 qpair failed and we were unable to recover it. 00:33:20.824 [2024-06-07 23:29:43.484405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.824 [2024-06-07 23:29:43.484769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.824 [2024-06-07 23:29:43.484777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.824 qpair failed and we were unable to recover it. 00:33:20.824 [2024-06-07 23:29:43.485126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.824 [2024-06-07 23:29:43.485393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.824 [2024-06-07 23:29:43.485402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.824 qpair failed and we were unable to recover it. 00:33:20.824 [2024-06-07 23:29:43.485728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.824 [2024-06-07 23:29:43.486085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.824 [2024-06-07 23:29:43.486094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.824 qpair failed and we were unable to recover it. 00:33:20.824 [2024-06-07 23:29:43.486535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.824 [2024-06-07 23:29:43.486872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.824 [2024-06-07 23:29:43.486881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.824 qpair failed and we were unable to recover it. 00:33:20.824 [2024-06-07 23:29:43.487223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.824 [2024-06-07 23:29:43.487595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.824 [2024-06-07 23:29:43.487605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:20.824 qpair failed and we were unable to recover it. 
00:33:20.824 [2024-06-07 23:29:43.487968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.094 [2024-06-07 23:29:43.488350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.094 [2024-06-07 23:29:43.488361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.094 qpair failed and we were unable to recover it. 00:33:21.094 [2024-06-07 23:29:43.488805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.094 [2024-06-07 23:29:43.488993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.094 [2024-06-07 23:29:43.489003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.094 qpair failed and we were unable to recover it. 00:33:21.094 [2024-06-07 23:29:43.489366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.094 [2024-06-07 23:29:43.489698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.094 [2024-06-07 23:29:43.489708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.094 qpair failed and we were unable to recover it. 00:33:21.094 [2024-06-07 23:29:43.489926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.094 [2024-06-07 23:29:43.490287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.094 [2024-06-07 23:29:43.490296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.094 qpair failed and we were unable to recover it. 00:33:21.094 [2024-06-07 23:29:43.490624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.094 [2024-06-07 23:29:43.490974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.094 [2024-06-07 23:29:43.490984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.094 qpair failed and we were unable to recover it. 00:33:21.094 [2024-06-07 23:29:43.491354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.094 [2024-06-07 23:29:43.491585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.094 [2024-06-07 23:29:43.491594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.094 qpair failed and we were unable to recover it. 00:33:21.094 [2024-06-07 23:29:43.491790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.094 [2024-06-07 23:29:43.492128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.094 [2024-06-07 23:29:43.492138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.094 qpair failed and we were unable to recover it. 
00:33:21.094 [2024-06-07 23:29:43.492501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.094 [2024-06-07 23:29:43.492777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.094 [2024-06-07 23:29:43.492786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.094 qpair failed and we were unable to recover it. 00:33:21.094 [2024-06-07 23:29:43.493030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.094 [2024-06-07 23:29:43.493371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.094 [2024-06-07 23:29:43.493380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.094 qpair failed and we were unable to recover it. 00:33:21.094 [2024-06-07 23:29:43.493712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.094 [2024-06-07 23:29:43.494059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.094 [2024-06-07 23:29:43.494067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.094 qpair failed and we were unable to recover it. 00:33:21.094 [2024-06-07 23:29:43.494366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.094 [2024-06-07 23:29:43.494751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.094 [2024-06-07 23:29:43.494762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.094 qpair failed and we were unable to recover it. 00:33:21.094 [2024-06-07 23:29:43.495086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.094 [2024-06-07 23:29:43.495457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.094 [2024-06-07 23:29:43.495467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.094 qpair failed and we were unable to recover it. 00:33:21.094 [2024-06-07 23:29:43.495828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.094 [2024-06-07 23:29:43.496151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.094 [2024-06-07 23:29:43.496159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.094 qpair failed and we were unable to recover it. 00:33:21.094 [2024-06-07 23:29:43.496487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.094 [2024-06-07 23:29:43.496827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.094 [2024-06-07 23:29:43.496835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.094 qpair failed and we were unable to recover it. 
00:33:21.094 [2024-06-07 23:29:43.497158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.094 [2024-06-07 23:29:43.497524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.094 [2024-06-07 23:29:43.497534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.094 qpair failed and we were unable to recover it. 00:33:21.094 [2024-06-07 23:29:43.497880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.094 [2024-06-07 23:29:43.498256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.094 [2024-06-07 23:29:43.498264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.094 qpair failed and we were unable to recover it. 00:33:21.094 [2024-06-07 23:29:43.498447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.094 [2024-06-07 23:29:43.498842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.094 [2024-06-07 23:29:43.498850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.094 qpair failed and we were unable to recover it. 00:33:21.095 [2024-06-07 23:29:43.499176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.499534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.499544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.095 qpair failed and we were unable to recover it. 00:33:21.095 [2024-06-07 23:29:43.499891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.500220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.500229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.095 qpair failed and we were unable to recover it. 00:33:21.095 [2024-06-07 23:29:43.500628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.500977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.500986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.095 qpair failed and we were unable to recover it. 00:33:21.095 [2024-06-07 23:29:43.501314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.501656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.501665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.095 qpair failed and we were unable to recover it. 
00:33:21.095 [2024-06-07 23:29:43.502027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.502272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.502282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.095 qpair failed and we were unable to recover it. 00:33:21.095 [2024-06-07 23:29:43.502596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.502966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.502974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.095 qpair failed and we were unable to recover it. 00:33:21.095 [2024-06-07 23:29:43.503338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.503685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.503694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.095 qpair failed and we were unable to recover it. 00:33:21.095 [2024-06-07 23:29:43.504057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.504413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.504422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.095 qpair failed and we were unable to recover it. 00:33:21.095 [2024-06-07 23:29:43.504859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.505184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.505193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.095 qpair failed and we were unable to recover it. 00:33:21.095 [2024-06-07 23:29:43.505609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.505992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.506002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.095 qpair failed and we were unable to recover it. 00:33:21.095 [2024-06-07 23:29:43.506348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.506716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.506724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.095 qpair failed and we were unable to recover it. 
00:33:21.095 [2024-06-07 23:29:43.507044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.507424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.507434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.095 qpair failed and we were unable to recover it. 00:33:21.095 [2024-06-07 23:29:43.507750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.508115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.508124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.095 qpair failed and we were unable to recover it. 00:33:21.095 [2024-06-07 23:29:43.508570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.508896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.508905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.095 qpair failed and we were unable to recover it. 00:33:21.095 [2024-06-07 23:29:43.509269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.509593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.509603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.095 qpair failed and we were unable to recover it. 00:33:21.095 [2024-06-07 23:29:43.509948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.510314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.510324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.095 qpair failed and we were unable to recover it. 00:33:21.095 [2024-06-07 23:29:43.510658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.510885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.510894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.095 qpair failed and we were unable to recover it. 00:33:21.095 [2024-06-07 23:29:43.511254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.511500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.511510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.095 qpair failed and we were unable to recover it. 
00:33:21.095 [2024-06-07 23:29:43.511868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.512199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.512208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.095 qpair failed and we were unable to recover it. 00:33:21.095 [2024-06-07 23:29:43.512590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.512943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.512951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.095 qpair failed and we were unable to recover it. 00:33:21.095 [2024-06-07 23:29:43.513299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.513608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.513616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.095 qpair failed and we were unable to recover it. 00:33:21.095 [2024-06-07 23:29:43.513984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.514314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.514323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.095 qpair failed and we were unable to recover it. 00:33:21.095 [2024-06-07 23:29:43.514695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.515045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.515055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.095 qpair failed and we were unable to recover it. 00:33:21.095 [2024-06-07 23:29:43.515405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.515774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.515783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.095 qpair failed and we were unable to recover it. 00:33:21.095 [2024-06-07 23:29:43.516142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.516511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.516520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.095 qpair failed and we were unable to recover it. 
00:33:21.095 [2024-06-07 23:29:43.516866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.517216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.517225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.095 qpair failed and we were unable to recover it. 00:33:21.095 [2024-06-07 23:29:43.517619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.517946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.095 [2024-06-07 23:29:43.517955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.095 qpair failed and we were unable to recover it. 00:33:21.095 [2024-06-07 23:29:43.518283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.518592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.518601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.096 qpair failed and we were unable to recover it. 00:33:21.096 [2024-06-07 23:29:43.518968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.519319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.519328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.096 qpair failed and we were unable to recover it. 00:33:21.096 [2024-06-07 23:29:43.519688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.520056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.520065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.096 qpair failed and we were unable to recover it. 00:33:21.096 [2024-06-07 23:29:43.520427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.520774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.520783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.096 qpair failed and we were unable to recover it. 00:33:21.096 [2024-06-07 23:29:43.521125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.521441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.521450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.096 qpair failed and we were unable to recover it. 
00:33:21.096 [2024-06-07 23:29:43.521768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.522150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.522160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.096 qpair failed and we were unable to recover it. 00:33:21.096 [2024-06-07 23:29:43.522507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.522843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.522852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.096 qpair failed and we were unable to recover it. 00:33:21.096 [2024-06-07 23:29:43.523234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.523587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.523597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.096 qpair failed and we were unable to recover it. 00:33:21.096 [2024-06-07 23:29:43.523968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.524337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.524346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.096 qpair failed and we were unable to recover it. 00:33:21.096 [2024-06-07 23:29:43.524682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.525036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.525045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.096 qpair failed and we were unable to recover it. 00:33:21.096 [2024-06-07 23:29:43.525390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.525740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.525749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.096 qpair failed and we were unable to recover it. 00:33:21.096 [2024-06-07 23:29:43.526028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.526417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.526426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.096 qpair failed and we were unable to recover it. 
00:33:21.096 [2024-06-07 23:29:43.526769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.527126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.527135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.096 qpair failed and we were unable to recover it. 00:33:21.096 [2024-06-07 23:29:43.527533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.527891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.527899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.096 qpair failed and we were unable to recover it. 00:33:21.096 [2024-06-07 23:29:43.528213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.528589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.528598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.096 qpair failed and we were unable to recover it. 00:33:21.096 [2024-06-07 23:29:43.528964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.529293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.529302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.096 qpair failed and we were unable to recover it. 00:33:21.096 [2024-06-07 23:29:43.529646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.529914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.529923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.096 qpair failed and we were unable to recover it. 00:33:21.096 [2024-06-07 23:29:43.530287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.530542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.530553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.096 qpair failed and we were unable to recover it. 00:33:21.096 [2024-06-07 23:29:43.530908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.531293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.531302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.096 qpair failed and we were unable to recover it. 
00:33:21.096 [2024-06-07 23:29:43.531636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.531941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.531950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.096 qpair failed and we were unable to recover it. 00:33:21.096 [2024-06-07 23:29:43.532307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.532558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.532567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.096 qpair failed and we were unable to recover it. 00:33:21.096 [2024-06-07 23:29:43.532904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.533158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.533167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.096 qpair failed and we were unable to recover it. 00:33:21.096 [2024-06-07 23:29:43.533523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.533893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.533901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.096 qpair failed and we were unable to recover it. 00:33:21.096 [2024-06-07 23:29:43.534305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.534636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.534644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.096 qpair failed and we were unable to recover it. 00:33:21.096 [2024-06-07 23:29:43.534964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.535310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.535319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.096 qpair failed and we were unable to recover it. 00:33:21.096 [2024-06-07 23:29:43.535647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.536004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.536013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.096 qpair failed and we were unable to recover it. 
00:33:21.096 [2024-06-07 23:29:43.536316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.536674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.536682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.096 qpair failed and we were unable to recover it. 00:33:21.096 [2024-06-07 23:29:43.536983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.537334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.096 [2024-06-07 23:29:43.537344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.096 qpair failed and we were unable to recover it. 00:33:21.097 [2024-06-07 23:29:43.537590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.537940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.537948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.097 qpair failed and we were unable to recover it. 00:33:21.097 [2024-06-07 23:29:43.538319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.538660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.538669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.097 qpair failed and we were unable to recover it. 00:33:21.097 [2024-06-07 23:29:43.539077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.539410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.539419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.097 qpair failed and we were unable to recover it. 00:33:21.097 [2024-06-07 23:29:43.539661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.540018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.540026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.097 qpair failed and we were unable to recover it. 00:33:21.097 [2024-06-07 23:29:43.540342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.540714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.540723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.097 qpair failed and we were unable to recover it. 
00:33:21.097 [2024-06-07 23:29:43.541057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.541293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.541302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.097 qpair failed and we were unable to recover it. 00:33:21.097 [2024-06-07 23:29:43.541725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.542059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.542068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.097 qpair failed and we were unable to recover it. 00:33:21.097 [2024-06-07 23:29:43.542414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.542794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.542804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.097 qpair failed and we were unable to recover it. 00:33:21.097 [2024-06-07 23:29:43.543108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.543535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.543544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.097 qpair failed and we were unable to recover it. 00:33:21.097 [2024-06-07 23:29:43.543791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.544139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.544147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.097 qpair failed and we were unable to recover it. 00:33:21.097 [2024-06-07 23:29:43.544493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.544863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.544871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.097 qpair failed and we were unable to recover it. 00:33:21.097 [2024-06-07 23:29:43.545109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.545454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.545463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.097 qpair failed and we were unable to recover it. 
00:33:21.097 [2024-06-07 23:29:43.545794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.546136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.546145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.097 qpair failed and we were unable to recover it. 00:33:21.097 [2024-06-07 23:29:43.546281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.546608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.546617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.097 qpair failed and we were unable to recover it. 00:33:21.097 [2024-06-07 23:29:43.546766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.547071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.547080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.097 qpair failed and we were unable to recover it. 00:33:21.097 [2024-06-07 23:29:43.547435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.547796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.547806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.097 qpair failed and we were unable to recover it. 00:33:21.097 [2024-06-07 23:29:43.548132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.548497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.548507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.097 qpair failed and we were unable to recover it. 00:33:21.097 [2024-06-07 23:29:43.548835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.549162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.549171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.097 qpair failed and we were unable to recover it. 00:33:21.097 [2024-06-07 23:29:43.549516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.549633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.549641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.097 qpair failed and we were unable to recover it. 
00:33:21.097 [2024-06-07 23:29:43.549852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.550262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.550272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.097 qpair failed and we were unable to recover it. 00:33:21.097 [2024-06-07 23:29:43.550643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.550886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.550895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.097 qpair failed and we were unable to recover it. 00:33:21.097 [2024-06-07 23:29:43.551135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.551476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.551486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.097 qpair failed and we were unable to recover it. 00:33:21.097 [2024-06-07 23:29:43.551811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.552015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.552024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.097 qpair failed and we were unable to recover it. 00:33:21.097 [2024-06-07 23:29:43.552381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.552751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.552760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.097 qpair failed and we were unable to recover it. 00:33:21.097 [2024-06-07 23:29:43.553073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.553445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.553455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.097 qpair failed and we were unable to recover it. 00:33:21.097 [2024-06-07 23:29:43.553692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.553923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.553932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.097 qpair failed and we were unable to recover it. 
00:33:21.097 [2024-06-07 23:29:43.554285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.554626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.554635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.097 qpair failed and we were unable to recover it. 00:33:21.097 [2024-06-07 23:29:43.554967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.555193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.555203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.097 qpair failed and we were unable to recover it. 00:33:21.097 [2024-06-07 23:29:43.555513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.555712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.097 [2024-06-07 23:29:43.555721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.097 qpair failed and we were unable to recover it. 00:33:21.098 [2024-06-07 23:29:43.556168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.556393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.556403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.098 qpair failed and we were unable to recover it. 00:33:21.098 [2024-06-07 23:29:43.556785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.557125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.557135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.098 qpair failed and we were unable to recover it. 00:33:21.098 [2024-06-07 23:29:43.557471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.557810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.557819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.098 qpair failed and we were unable to recover it. 00:33:21.098 [2024-06-07 23:29:43.558185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.558529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.558538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.098 qpair failed and we were unable to recover it. 
00:33:21.098 [2024-06-07 23:29:43.558885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.559257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.559268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.098 qpair failed and we were unable to recover it. 00:33:21.098 [2024-06-07 23:29:43.559633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.559973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.559981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.098 qpair failed and we were unable to recover it. 00:33:21.098 [2024-06-07 23:29:43.560325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.560702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.560711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.098 qpair failed and we were unable to recover it. 00:33:21.098 [2024-06-07 23:29:43.561086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.561422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.561431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.098 qpair failed and we were unable to recover it. 00:33:21.098 [2024-06-07 23:29:43.561569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.561932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.561940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.098 qpair failed and we were unable to recover it. 00:33:21.098 [2024-06-07 23:29:43.562296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.562642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.562651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.098 qpair failed and we were unable to recover it. 00:33:21.098 [2024-06-07 23:29:43.562978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.563338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.563348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.098 qpair failed and we were unable to recover it. 
00:33:21.098 [2024-06-07 23:29:43.563709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.564054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.564063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.098 qpair failed and we were unable to recover it. 00:33:21.098 [2024-06-07 23:29:43.564407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.564786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.564795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.098 qpair failed and we were unable to recover it. 00:33:21.098 [2024-06-07 23:29:43.565147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.565497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.565506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.098 qpair failed and we were unable to recover it. 00:33:21.098 [2024-06-07 23:29:43.565850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.566054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.566064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.098 qpair failed and we were unable to recover it. 00:33:21.098 [2024-06-07 23:29:43.566414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.566756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.566765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.098 qpair failed and we were unable to recover it. 00:33:21.098 [2024-06-07 23:29:43.567102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.567460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.567470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.098 qpair failed and we were unable to recover it. 00:33:21.098 [2024-06-07 23:29:43.567837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.568144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.568153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.098 qpair failed and we were unable to recover it. 
00:33:21.098 [2024-06-07 23:29:43.568495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.568837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.568846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.098 qpair failed and we were unable to recover it. 00:33:21.098 [2024-06-07 23:29:43.569143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.569492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.569501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.098 qpair failed and we were unable to recover it. 00:33:21.098 [2024-06-07 23:29:43.569749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.570129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.570137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.098 qpair failed and we were unable to recover it. 00:33:21.098 [2024-06-07 23:29:43.570513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.570886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.570897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.098 qpair failed and we were unable to recover it. 00:33:21.098 [2024-06-07 23:29:43.571142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.571496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.571505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.098 qpair failed and we were unable to recover it. 00:33:21.098 [2024-06-07 23:29:43.571857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.572214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.572224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.098 qpair failed and we were unable to recover it. 00:33:21.098 [2024-06-07 23:29:43.572571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.572951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.572961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.098 qpair failed and we were unable to recover it. 
00:33:21.098 [2024-06-07 23:29:43.573313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.573532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.573541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.098 qpair failed and we were unable to recover it. 00:33:21.098 [2024-06-07 23:29:43.573798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.574035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.574044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.098 qpair failed and we were unable to recover it. 00:33:21.098 [2024-06-07 23:29:43.574399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.574755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.098 [2024-06-07 23:29:43.574764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.099 qpair failed and we were unable to recover it. 00:33:21.099 [2024-06-07 23:29:43.575095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.575288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.575298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.099 qpair failed and we were unable to recover it. 00:33:21.099 [2024-06-07 23:29:43.575635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.575970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.575979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.099 qpair failed and we were unable to recover it. 00:33:21.099 [2024-06-07 23:29:43.576336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.576682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.576690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.099 qpair failed and we were unable to recover it. 00:33:21.099 [2024-06-07 23:29:43.577043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.577278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.577287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.099 qpair failed and we were unable to recover it. 
00:33:21.099 [2024-06-07 23:29:43.577533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.577892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.577901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.099 qpair failed and we were unable to recover it. 00:33:21.099 [2024-06-07 23:29:43.578257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.578611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.578620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.099 qpair failed and we were unable to recover it. 00:33:21.099 [2024-06-07 23:29:43.578990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.579366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.579375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.099 qpair failed and we were unable to recover it. 00:33:21.099 [2024-06-07 23:29:43.579738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.580086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.580095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.099 qpair failed and we were unable to recover it. 00:33:21.099 [2024-06-07 23:29:43.580506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.580851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.580860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.099 qpair failed and we were unable to recover it. 00:33:21.099 [2024-06-07 23:29:43.581230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.581594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.581604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.099 qpair failed and we were unable to recover it. 00:33:21.099 [2024-06-07 23:29:43.581938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.582321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.582330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.099 qpair failed and we were unable to recover it. 
00:33:21.099 [2024-06-07 23:29:43.582693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.582853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.582863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.099 qpair failed and we were unable to recover it. 00:33:21.099 [2024-06-07 23:29:43.583290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.583638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.583647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.099 qpair failed and we were unable to recover it. 00:33:21.099 [2024-06-07 23:29:43.584015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.584397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.584406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.099 qpair failed and we were unable to recover it. 00:33:21.099 [2024-06-07 23:29:43.584769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.585147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.585156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.099 qpair failed and we were unable to recover it. 00:33:21.099 [2024-06-07 23:29:43.585498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.585854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.585863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.099 qpair failed and we were unable to recover it. 00:33:21.099 [2024-06-07 23:29:43.586221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.586574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.586583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.099 qpair failed and we were unable to recover it. 00:33:21.099 [2024-06-07 23:29:43.586684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.586891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.586901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.099 qpair failed and we were unable to recover it. 
00:33:21.099 [2024-06-07 23:29:43.587269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.587494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.587503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.099 qpair failed and we were unable to recover it. 00:33:21.099 [2024-06-07 23:29:43.587828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.588204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.588214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.099 qpair failed and we were unable to recover it. 00:33:21.099 [2024-06-07 23:29:43.588544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.588879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.588888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.099 qpair failed and we were unable to recover it. 00:33:21.099 [2024-06-07 23:29:43.589238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.589576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.589585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.099 qpair failed and we were unable to recover it. 00:33:21.099 [2024-06-07 23:29:43.589945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.590312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.590322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.099 qpair failed and we were unable to recover it. 00:33:21.099 [2024-06-07 23:29:43.590731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.590930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.590940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.099 qpair failed and we were unable to recover it. 00:33:21.099 [2024-06-07 23:29:43.591168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.591534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.099 [2024-06-07 23:29:43.591543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.100 qpair failed and we were unable to recover it. 
00:33:21.100 [2024-06-07 23:29:43.591753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.592128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.592136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.100 qpair failed and we were unable to recover it. 00:33:21.100 [2024-06-07 23:29:43.592349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.592710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.592719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.100 qpair failed and we were unable to recover it. 00:33:21.100 [2024-06-07 23:29:43.592912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.593129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.593138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.100 qpair failed and we were unable to recover it. 00:33:21.100 [2024-06-07 23:29:43.593467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.593682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.593692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.100 qpair failed and we were unable to recover it. 00:33:21.100 [2024-06-07 23:29:43.593944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.594349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.594359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.100 qpair failed and we were unable to recover it. 00:33:21.100 [2024-06-07 23:29:43.594701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.595054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.595064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.100 qpair failed and we were unable to recover it. 00:33:21.100 [2024-06-07 23:29:43.595416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.595762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.595771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.100 qpair failed and we were unable to recover it. 
00:33:21.100 [2024-06-07 23:29:43.595994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.596369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.596379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.100 qpair failed and we were unable to recover it. 00:33:21.100 [2024-06-07 23:29:43.596589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.596946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.596954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.100 qpair failed and we were unable to recover it. 00:33:21.100 [2024-06-07 23:29:43.597178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.597503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.597512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.100 qpair failed and we were unable to recover it. 00:33:21.100 [2024-06-07 23:29:43.597838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.598168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.598177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.100 qpair failed and we were unable to recover it. 00:33:21.100 [2024-06-07 23:29:43.598519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.598754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.598763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.100 qpair failed and we were unable to recover it. 00:33:21.100 [2024-06-07 23:29:43.599120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.599489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.599499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.100 qpair failed and we were unable to recover it. 00:33:21.100 [2024-06-07 23:29:43.599801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.600164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.600172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.100 qpair failed and we were unable to recover it. 
00:33:21.100 [2024-06-07 23:29:43.600504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.600664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.600672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.100 qpair failed and we were unable to recover it. 00:33:21.100 [2024-06-07 23:29:43.601019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.601345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.601354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.100 qpair failed and we were unable to recover it. 00:33:21.100 [2024-06-07 23:29:43.601573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.601947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.601956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.100 qpair failed and we were unable to recover it. 00:33:21.100 [2024-06-07 23:29:43.602280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.602633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.602641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.100 qpair failed and we were unable to recover it. 00:33:21.100 [2024-06-07 23:29:43.602969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.603329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.603339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.100 qpair failed and we were unable to recover it. 00:33:21.100 [2024-06-07 23:29:43.603687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.603910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.603923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.100 qpair failed and we were unable to recover it. 00:33:21.100 [2024-06-07 23:29:43.604300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.604666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.604674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.100 qpair failed and we were unable to recover it. 
00:33:21.100 [2024-06-07 23:29:43.605018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.605376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.605385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.100 qpair failed and we were unable to recover it. 00:33:21.100 [2024-06-07 23:29:43.605730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.606122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.606131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.100 qpair failed and we were unable to recover it. 00:33:21.100 [2024-06-07 23:29:43.606369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.606743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.606752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.100 qpair failed and we were unable to recover it. 00:33:21.100 [2024-06-07 23:29:43.607100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.607442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.607451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.100 qpair failed and we were unable to recover it. 00:33:21.100 [2024-06-07 23:29:43.607781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.608154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.608163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.100 qpair failed and we were unable to recover it. 00:33:21.100 [2024-06-07 23:29:43.608401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.608777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.608785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.100 qpair failed and we were unable to recover it. 00:33:21.100 [2024-06-07 23:29:43.609122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.609449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.609459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.100 qpair failed and we were unable to recover it. 
00:33:21.100 [2024-06-07 23:29:43.609805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.100 [2024-06-07 23:29:43.610138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.610147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.101 qpair failed and we were unable to recover it. 00:33:21.101 [2024-06-07 23:29:43.610419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.610793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.610801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.101 qpair failed and we were unable to recover it. 00:33:21.101 [2024-06-07 23:29:43.611124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.611455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.611465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.101 qpair failed and we were unable to recover it. 00:33:21.101 [2024-06-07 23:29:43.611712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.612066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.612075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.101 qpair failed and we were unable to recover it. 00:33:21.101 [2024-06-07 23:29:43.612477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.612846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.612855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.101 qpair failed and we were unable to recover it. 00:33:21.101 [2024-06-07 23:29:43.613103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.613488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.613497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.101 qpair failed and we were unable to recover it. 00:33:21.101 [2024-06-07 23:29:43.613866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.614220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.614229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.101 qpair failed and we were unable to recover it. 
00:33:21.101 [2024-06-07 23:29:43.614622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.614956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.614965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.101 qpair failed and we were unable to recover it. 00:33:21.101 [2024-06-07 23:29:43.615318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.615667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.615676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.101 qpair failed and we were unable to recover it. 00:33:21.101 [2024-06-07 23:29:43.615973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.616217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.616225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.101 qpair failed and we were unable to recover it. 00:33:21.101 [2024-06-07 23:29:43.616459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.616792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.616800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.101 qpair failed and we were unable to recover it. 00:33:21.101 [2024-06-07 23:29:43.617164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.617510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.617519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.101 qpair failed and we were unable to recover it. 00:33:21.101 [2024-06-07 23:29:43.617886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.618235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.618254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.101 qpair failed and we were unable to recover it. 00:33:21.101 [2024-06-07 23:29:43.618571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.618935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.618944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.101 qpair failed and we were unable to recover it. 
00:33:21.101 [2024-06-07 23:29:43.619291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.619600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.619609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.101 qpair failed and we were unable to recover it. 00:33:21.101 [2024-06-07 23:29:43.619865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.620202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.620212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.101 qpair failed and we were unable to recover it. 00:33:21.101 [2024-06-07 23:29:43.620559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.620921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.620930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.101 qpair failed and we were unable to recover it. 00:33:21.101 [2024-06-07 23:29:43.621277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.621615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.621624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.101 qpair failed and we were unable to recover it. 00:33:21.101 [2024-06-07 23:29:43.621991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.622365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.622374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.101 qpair failed and we were unable to recover it. 00:33:21.101 [2024-06-07 23:29:43.622717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.622980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.622989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.101 qpair failed and we were unable to recover it. 00:33:21.101 [2024-06-07 23:29:43.623316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.623698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.623707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.101 qpair failed and we were unable to recover it. 
00:33:21.101 [2024-06-07 23:29:43.624056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.624397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.624406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.101 qpair failed and we were unable to recover it. 00:33:21.101 [2024-06-07 23:29:43.624762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.624991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.625000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.101 qpair failed and we were unable to recover it. 00:33:21.101 [2024-06-07 23:29:43.625328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.625560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.625568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.101 qpair failed and we were unable to recover it. 00:33:21.101 [2024-06-07 23:29:43.625942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.626314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.626323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.101 qpair failed and we were unable to recover it. 00:33:21.101 [2024-06-07 23:29:43.626641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.626981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.626989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.101 qpair failed and we were unable to recover it. 00:33:21.101 [2024-06-07 23:29:43.627320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.627588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.627596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.101 qpair failed and we were unable to recover it. 00:33:21.101 [2024-06-07 23:29:43.627941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.628319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.628328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.101 qpair failed and we were unable to recover it. 
00:33:21.101 [2024-06-07 23:29:43.628673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.629017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.101 [2024-06-07 23:29:43.629025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.101 qpair failed and we were unable to recover it. 00:33:21.101 [2024-06-07 23:29:43.629348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.629719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.629727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.102 qpair failed and we were unable to recover it. 00:33:21.102 [2024-06-07 23:29:43.630071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.630414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.630424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.102 qpair failed and we were unable to recover it. 00:33:21.102 [2024-06-07 23:29:43.630757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.631107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.631116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.102 qpair failed and we were unable to recover it. 00:33:21.102 [2024-06-07 23:29:43.631450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.631830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.631839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.102 qpair failed and we were unable to recover it. 00:33:21.102 [2024-06-07 23:29:43.632165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.632568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.632577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.102 qpair failed and we were unable to recover it. 00:33:21.102 [2024-06-07 23:29:43.632911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.633115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.633126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.102 qpair failed and we were unable to recover it. 
00:33:21.102 [2024-06-07 23:29:43.633546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.634023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.634037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.102 qpair failed and we were unable to recover it. 00:33:21.102 [2024-06-07 23:29:43.634405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.634689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.634698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.102 qpair failed and we were unable to recover it. 00:33:21.102 [2024-06-07 23:29:43.635047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.635433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.635443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.102 qpair failed and we were unable to recover it. 00:33:21.102 [2024-06-07 23:29:43.635802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.636181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.636189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.102 qpair failed and we were unable to recover it. 00:33:21.102 [2024-06-07 23:29:43.636567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.636923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.636932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.102 qpair failed and we were unable to recover it. 00:33:21.102 [2024-06-07 23:29:43.637265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.637587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.637596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.102 qpair failed and we were unable to recover it. 00:33:21.102 [2024-06-07 23:29:43.637945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.638314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.638323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.102 qpair failed and we were unable to recover it. 
00:33:21.102 [2024-06-07 23:29:43.638678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.639036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.639047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.102 qpair failed and we were unable to recover it. 00:33:21.102 [2024-06-07 23:29:43.639389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.639723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.639731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.102 qpair failed and we were unable to recover it. 00:33:21.102 [2024-06-07 23:29:43.640062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.640362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.640371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.102 qpair failed and we were unable to recover it. 00:33:21.102 [2024-06-07 23:29:43.640736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.641109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.641118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.102 qpair failed and we were unable to recover it. 00:33:21.102 [2024-06-07 23:29:43.641452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.641799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.641808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.102 qpair failed and we were unable to recover it. 00:33:21.102 [2024-06-07 23:29:43.642175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.642524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.642533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.102 qpair failed and we were unable to recover it. 00:33:21.102 [2024-06-07 23:29:43.642866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.643250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.643260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.102 qpair failed and we were unable to recover it. 
00:33:21.102 [2024-06-07 23:29:43.643496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.643852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.643860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.102 qpair failed and we were unable to recover it. 00:33:21.102 [2024-06-07 23:29:43.644218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.644568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.644577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.102 qpair failed and we were unable to recover it. 00:33:21.102 [2024-06-07 23:29:43.644944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.645310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.645319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.102 qpair failed and we were unable to recover it. 00:33:21.102 [2024-06-07 23:29:43.645673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.645968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.645979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.102 qpair failed and we were unable to recover it. 00:33:21.102 [2024-06-07 23:29:43.646305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.646685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.646694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.102 qpair failed and we were unable to recover it. 00:33:21.102 [2024-06-07 23:29:43.647057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.647379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.647388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.102 qpair failed and we were unable to recover it. 00:33:21.102 [2024-06-07 23:29:43.647744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.648095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.648104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.102 qpair failed and we were unable to recover it. 
00:33:21.102 [2024-06-07 23:29:43.648446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.648770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.648780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.102 qpair failed and we were unable to recover it. 00:33:21.102 [2024-06-07 23:29:43.649132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.649484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.102 [2024-06-07 23:29:43.649493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.102 qpair failed and we were unable to recover it. 00:33:21.103 [2024-06-07 23:29:43.649830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.650209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.650219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.103 qpair failed and we were unable to recover it. 00:33:21.103 [2024-06-07 23:29:43.650437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.650791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.650799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.103 qpair failed and we were unable to recover it. 00:33:21.103 [2024-06-07 23:29:43.651170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.651419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.651429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.103 qpair failed and we were unable to recover it. 00:33:21.103 [2024-06-07 23:29:43.651814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.652167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.652177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.103 qpair failed and we were unable to recover it. 00:33:21.103 [2024-06-07 23:29:43.652506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.652843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.652852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.103 qpair failed and we were unable to recover it. 
00:33:21.103 [2024-06-07 23:29:43.653227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.653589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.653599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.103 qpair failed and we were unable to recover it. 00:33:21.103 [2024-06-07 23:29:43.653938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.654316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.654327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.103 qpair failed and we were unable to recover it. 00:33:21.103 [2024-06-07 23:29:43.654675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.654911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.654920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.103 qpair failed and we were unable to recover it. 00:33:21.103 [2024-06-07 23:29:43.655274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.655635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.655643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.103 qpair failed and we were unable to recover it. 00:33:21.103 [2024-06-07 23:29:43.655978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.656353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.656362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.103 qpair failed and we were unable to recover it. 00:33:21.103 [2024-06-07 23:29:43.656719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.657092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.657101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.103 qpair failed and we were unable to recover it. 00:33:21.103 [2024-06-07 23:29:43.657477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.657861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.657870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.103 qpair failed and we were unable to recover it. 
00:33:21.103 [2024-06-07 23:29:43.658207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.658555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.658565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.103 qpair failed and we were unable to recover it. 00:33:21.103 [2024-06-07 23:29:43.658927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.659128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.659138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.103 qpair failed and we were unable to recover it. 00:33:21.103 [2024-06-07 23:29:43.659497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.659875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.659884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.103 qpair failed and we were unable to recover it. 00:33:21.103 [2024-06-07 23:29:43.660208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.660576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.660585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.103 qpair failed and we were unable to recover it. 00:33:21.103 [2024-06-07 23:29:43.660938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.661161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.661171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.103 qpair failed and we were unable to recover it. 00:33:21.103 [2024-06-07 23:29:43.661433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.661729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.661739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.103 qpair failed and we were unable to recover it. 00:33:21.103 [2024-06-07 23:29:43.662076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.662430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.662439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.103 qpair failed and we were unable to recover it. 
00:33:21.103 [2024-06-07 23:29:43.662804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.663176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.663185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.103 qpair failed and we were unable to recover it. 00:33:21.103 [2024-06-07 23:29:43.663554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.663892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.663901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.103 qpair failed and we were unable to recover it. 00:33:21.103 [2024-06-07 23:29:43.664236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.664575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.664584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.103 qpair failed and we were unable to recover it. 00:33:21.103 [2024-06-07 23:29:43.664949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.665286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.665296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.103 qpair failed and we were unable to recover it. 00:33:21.103 [2024-06-07 23:29:43.665630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.665992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.103 [2024-06-07 23:29:43.666001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.104 qpair failed and we were unable to recover it. 00:33:21.104 [2024-06-07 23:29:43.666404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.104 [2024-06-07 23:29:43.666698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.104 [2024-06-07 23:29:43.666706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.104 qpair failed and we were unable to recover it. 00:33:21.104 [2024-06-07 23:29:43.667108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.104 [2024-06-07 23:29:43.667448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.104 [2024-06-07 23:29:43.667458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.104 qpair failed and we were unable to recover it. 
00:33:21.104 [2024-06-07 23:29:43.667803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.104 [2024-06-07 23:29:43.668134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.104 [2024-06-07 23:29:43.668144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.104 qpair failed and we were unable to recover it. 00:33:21.104 [2024-06-07 23:29:43.668511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.104 [2024-06-07 23:29:43.668886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.104 [2024-06-07 23:29:43.668895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.104 qpair failed and we were unable to recover it. 00:33:21.104 [2024-06-07 23:29:43.669240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.104 [2024-06-07 23:29:43.669577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.104 [2024-06-07 23:29:43.669586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.104 qpair failed and we were unable to recover it. 00:33:21.104 [2024-06-07 23:29:43.669919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.104 [2024-06-07 23:29:43.670277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.104 [2024-06-07 23:29:43.670286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.104 qpair failed and we were unable to recover it. 00:33:21.104 [2024-06-07 23:29:43.670517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.104 [2024-06-07 23:29:43.670892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.104 [2024-06-07 23:29:43.670901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.104 qpair failed and we were unable to recover it. 00:33:21.104 [2024-06-07 23:29:43.671228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.104 [2024-06-07 23:29:43.671585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.104 [2024-06-07 23:29:43.671594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.104 qpair failed and we were unable to recover it. 00:33:21.104 [2024-06-07 23:29:43.671941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.104 [2024-06-07 23:29:43.672274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.104 [2024-06-07 23:29:43.672286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.104 qpair failed and we were unable to recover it. 
00:33:21.104 [2024-06-07 23:29:43.672630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.104 [2024-06-07 23:29:43.673004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.104 [2024-06-07 23:29:43.673013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.104 qpair failed and we were unable to recover it. 00:33:21.104 [2024-06-07 23:29:43.673336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.104 [2024-06-07 23:29:43.673709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.104 [2024-06-07 23:29:43.673717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.104 qpair failed and we were unable to recover it. 00:33:21.104 [2024-06-07 23:29:43.674039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.104 [2024-06-07 23:29:43.674352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.104 [2024-06-07 23:29:43.674362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.104 qpair failed and we were unable to recover it. 00:33:21.104 [2024-06-07 23:29:43.674675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.104 [2024-06-07 23:29:43.675017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.104 [2024-06-07 23:29:43.675025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.104 qpair failed and we were unable to recover it. 00:33:21.104 [2024-06-07 23:29:43.675360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.104 [2024-06-07 23:29:43.675623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.104 [2024-06-07 23:29:43.675632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.104 qpair failed and we were unable to recover it. 00:33:21.104 [2024-06-07 23:29:43.675955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.104 [2024-06-07 23:29:43.676323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.104 [2024-06-07 23:29:43.676332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.104 qpair failed and we were unable to recover it. 00:33:21.104 [2024-06-07 23:29:43.676664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.104 [2024-06-07 23:29:43.677031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.104 [2024-06-07 23:29:43.677039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.104 qpair failed and we were unable to recover it. 
00:33:21.378 [2024-06-07 23:29:43.778170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.378 [2024-06-07 23:29:43.778420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.378 [2024-06-07 23:29:43.778429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.378 qpair failed and we were unable to recover it. 00:33:21.378 [2024-06-07 23:29:43.778739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.378 [2024-06-07 23:29:43.779071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.378 [2024-06-07 23:29:43.779080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.378 qpair failed and we were unable to recover it. 00:33:21.378 [2024-06-07 23:29:43.779440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.378 [2024-06-07 23:29:43.779648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.378 [2024-06-07 23:29:43.779656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.378 qpair failed and we were unable to recover it. 00:33:21.378 [2024-06-07 23:29:43.779979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.378 [2024-06-07 23:29:43.780314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.378 [2024-06-07 23:29:43.780323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.378 qpair failed and we were unable to recover it. 00:33:21.378 [2024-06-07 23:29:43.780627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.378 [2024-06-07 23:29:43.780997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.378 [2024-06-07 23:29:43.781005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.378 qpair failed and we were unable to recover it. 00:33:21.378 [2024-06-07 23:29:43.781327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.378 [2024-06-07 23:29:43.781702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.378 [2024-06-07 23:29:43.781711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.378 qpair failed and we were unable to recover it. 00:33:21.378 [2024-06-07 23:29:43.782074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.378 [2024-06-07 23:29:43.782450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.378 [2024-06-07 23:29:43.782459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.378 qpair failed and we were unable to recover it. 
00:33:21.378 [2024-06-07 23:29:43.782828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.378 [2024-06-07 23:29:43.783180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.378 [2024-06-07 23:29:43.783189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.378 qpair failed and we were unable to recover it. 00:33:21.378 [2024-06-07 23:29:43.783522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.378 [2024-06-07 23:29:43.783839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.378 [2024-06-07 23:29:43.783847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.378 qpair failed and we were unable to recover it. 00:33:21.378 [2024-06-07 23:29:43.784195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.378 [2024-06-07 23:29:43.784533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.378 [2024-06-07 23:29:43.784542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.378 qpair failed and we were unable to recover it. 00:33:21.378 [2024-06-07 23:29:43.784792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.378 [2024-06-07 23:29:43.785132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.378 [2024-06-07 23:29:43.785141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.378 qpair failed and we were unable to recover it. 00:33:21.378 [2024-06-07 23:29:43.785445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.785850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.785861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.379 qpair failed and we were unable to recover it. 00:33:21.379 [2024-06-07 23:29:43.786182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.786535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.786544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.379 qpair failed and we were unable to recover it. 00:33:21.379 [2024-06-07 23:29:43.786883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.787213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.787221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.379 qpair failed and we were unable to recover it. 
00:33:21.379 [2024-06-07 23:29:43.787688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.788064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.788073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.379 qpair failed and we were unable to recover it. 00:33:21.379 [2024-06-07 23:29:43.788398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.788741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.788750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.379 qpair failed and we were unable to recover it. 00:33:21.379 [2024-06-07 23:29:43.789120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.789528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.789537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.379 qpair failed and we were unable to recover it. 00:33:21.379 [2024-06-07 23:29:43.789877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.790237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.790249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.379 qpair failed and we were unable to recover it. 00:33:21.379 [2024-06-07 23:29:43.790575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.790891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.790899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.379 qpair failed and we were unable to recover it. 00:33:21.379 [2024-06-07 23:29:43.791256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.791483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.791492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.379 qpair failed and we were unable to recover it. 00:33:21.379 [2024-06-07 23:29:43.791833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.792202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.792211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.379 qpair failed and we were unable to recover it. 
00:33:21.379 [2024-06-07 23:29:43.792579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.792915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.792926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.379 qpair failed and we were unable to recover it. 00:33:21.379 [2024-06-07 23:29:43.793150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.793470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.793479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.379 qpair failed and we were unable to recover it. 00:33:21.379 [2024-06-07 23:29:43.793826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.794127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.794135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.379 qpair failed and we were unable to recover it. 00:33:21.379 [2024-06-07 23:29:43.794514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.794733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.794742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.379 qpair failed and we were unable to recover it. 00:33:21.379 [2024-06-07 23:29:43.795090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.795420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.795429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.379 qpair failed and we were unable to recover it. 00:33:21.379 [2024-06-07 23:29:43.795753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.796060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.796069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.379 qpair failed and we were unable to recover it. 00:33:21.379 [2024-06-07 23:29:43.796306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.796601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.796610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.379 qpair failed and we were unable to recover it. 
00:33:21.379 [2024-06-07 23:29:43.796960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.797249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.797258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.379 qpair failed and we were unable to recover it. 00:33:21.379 [2024-06-07 23:29:43.797478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.797820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.797828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.379 qpair failed and we were unable to recover it. 00:33:21.379 [2024-06-07 23:29:43.798063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.798415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.798424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.379 qpair failed and we were unable to recover it. 00:33:21.379 [2024-06-07 23:29:43.798751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.799053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.799062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.379 qpair failed and we were unable to recover it. 00:33:21.379 [2024-06-07 23:29:43.799376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.799736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.799745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.379 qpair failed and we were unable to recover it. 00:33:21.379 [2024-06-07 23:29:43.800093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.800260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.800269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.379 qpair failed and we were unable to recover it. 00:33:21.379 [2024-06-07 23:29:43.800619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.800838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.800847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.379 qpair failed and we were unable to recover it. 
00:33:21.379 [2024-06-07 23:29:43.801198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.801532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.801541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.379 qpair failed and we were unable to recover it. 00:33:21.379 [2024-06-07 23:29:43.801788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.802103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.802112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.379 qpair failed and we were unable to recover it. 00:33:21.379 [2024-06-07 23:29:43.802456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.802796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.802805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.379 qpair failed and we were unable to recover it. 00:33:21.379 [2024-06-07 23:29:43.803136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.803444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.803453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.379 qpair failed and we were unable to recover it. 00:33:21.379 [2024-06-07 23:29:43.803817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.804192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.379 [2024-06-07 23:29:43.804200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.379 qpair failed and we were unable to recover it. 00:33:21.380 [2024-06-07 23:29:43.804487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.804839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.804847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.380 qpair failed and we were unable to recover it. 00:33:21.380 [2024-06-07 23:29:43.805169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.805514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.805522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.380 qpair failed and we were unable to recover it. 
00:33:21.380 [2024-06-07 23:29:43.805987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.806344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.806353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.380 qpair failed and we were unable to recover it. 00:33:21.380 [2024-06-07 23:29:43.806718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.807067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.807076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.380 qpair failed and we were unable to recover it. 00:33:21.380 [2024-06-07 23:29:43.807431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.807775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.807784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.380 qpair failed and we were unable to recover it. 00:33:21.380 [2024-06-07 23:29:43.808120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.808514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.808524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.380 qpair failed and we were unable to recover it. 00:33:21.380 [2024-06-07 23:29:43.808904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.809258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.809268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.380 qpair failed and we were unable to recover it. 00:33:21.380 [2024-06-07 23:29:43.809519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.809899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.809907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.380 qpair failed and we were unable to recover it. 00:33:21.380 [2024-06-07 23:29:43.810318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.810546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.810554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.380 qpair failed and we were unable to recover it. 
00:33:21.380 [2024-06-07 23:29:43.810972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.811183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.811192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.380 qpair failed and we were unable to recover it. 00:33:21.380 [2024-06-07 23:29:43.811512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.811859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.811868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.380 qpair failed and we were unable to recover it. 00:33:21.380 [2024-06-07 23:29:43.812296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.812542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.812551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.380 qpair failed and we were unable to recover it. 00:33:21.380 [2024-06-07 23:29:43.812805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.813162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.813171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.380 qpair failed and we were unable to recover it. 00:33:21.380 [2024-06-07 23:29:43.813507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.813802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.813810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.380 qpair failed and we were unable to recover it. 00:33:21.380 [2024-06-07 23:29:43.814048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.814395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.814404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.380 qpair failed and we were unable to recover it. 00:33:21.380 [2024-06-07 23:29:43.814806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.815116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.815132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.380 qpair failed and we were unable to recover it. 
00:33:21.380 [2024-06-07 23:29:43.815474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.815882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.815891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.380 qpair failed and we were unable to recover it. 00:33:21.380 [2024-06-07 23:29:43.816217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.816579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.816589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.380 qpair failed and we were unable to recover it. 00:33:21.380 [2024-06-07 23:29:43.816961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.817337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.817346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.380 qpair failed and we were unable to recover it. 00:33:21.380 [2024-06-07 23:29:43.817705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.817926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.817935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.380 qpair failed and we were unable to recover it. 00:33:21.380 [2024-06-07 23:29:43.818286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.818668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.818678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.380 qpair failed and we were unable to recover it. 00:33:21.380 [2024-06-07 23:29:43.819026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.819268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.819278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.380 qpair failed and we were unable to recover it. 00:33:21.380 [2024-06-07 23:29:43.819632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.819970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.819979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.380 qpair failed and we were unable to recover it. 
00:33:21.380 [2024-06-07 23:29:43.820165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.820571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.820580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.380 qpair failed and we were unable to recover it. 00:33:21.380 [2024-06-07 23:29:43.820907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.821126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.821134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.380 qpair failed and we were unable to recover it. 00:33:21.380 [2024-06-07 23:29:43.821492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.821847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.821856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.380 qpair failed and we were unable to recover it. 00:33:21.380 [2024-06-07 23:29:43.822231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.822532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.822542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.380 qpair failed and we were unable to recover it. 00:33:21.380 [2024-06-07 23:29:43.822891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.823091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.823101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.380 qpair failed and we were unable to recover it. 00:33:21.380 [2024-06-07 23:29:43.823348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.823588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.380 [2024-06-07 23:29:43.823597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.381 qpair failed and we were unable to recover it. 00:33:21.381 [2024-06-07 23:29:43.823850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.824202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.824211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.381 qpair failed and we were unable to recover it. 
00:33:21.381 [2024-06-07 23:29:43.824460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.824839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.824847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.381 qpair failed and we were unable to recover it. 00:33:21.381 [2024-06-07 23:29:43.825248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.825472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.825481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.381 qpair failed and we were unable to recover it. 00:33:21.381 [2024-06-07 23:29:43.825850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.826184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.826201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.381 qpair failed and we were unable to recover it. 00:33:21.381 [2024-06-07 23:29:43.826418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.826798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.826808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.381 qpair failed and we were unable to recover it. 00:33:21.381 [2024-06-07 23:29:43.827235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.827593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.827603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.381 qpair failed and we were unable to recover it. 00:33:21.381 [2024-06-07 23:29:43.827976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.828196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.828205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.381 qpair failed and we were unable to recover it. 00:33:21.381 [2024-06-07 23:29:43.828597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.828937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.828946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.381 qpair failed and we were unable to recover it. 
00:33:21.381 [2024-06-07 23:29:43.829309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.829538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.829548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.381 qpair failed and we were unable to recover it. 00:33:21.381 [2024-06-07 23:29:43.829879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.830213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.830223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.381 qpair failed and we were unable to recover it. 00:33:21.381 [2024-06-07 23:29:43.830631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.830965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.830973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.381 qpair failed and we were unable to recover it. 00:33:21.381 [2024-06-07 23:29:43.831315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.831590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.831599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.381 qpair failed and we were unable to recover it. 00:33:21.381 [2024-06-07 23:29:43.832003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.832334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.832343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.381 qpair failed and we were unable to recover it. 00:33:21.381 [2024-06-07 23:29:43.832553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.832928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.832937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.381 qpair failed and we were unable to recover it. 00:33:21.381 [2024-06-07 23:29:43.833279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.833622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.833631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.381 qpair failed and we were unable to recover it. 
00:33:21.381 [2024-06-07 23:29:43.833922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.834258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.834267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.381 qpair failed and we were unable to recover it. 00:33:21.381 [2024-06-07 23:29:43.834590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.834925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.834934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.381 qpair failed and we were unable to recover it. 00:33:21.381 [2024-06-07 23:29:43.835271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.835618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.835627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.381 qpair failed and we were unable to recover it. 00:33:21.381 [2024-06-07 23:29:43.835850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.836226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.836234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.381 qpair failed and we were unable to recover it. 00:33:21.381 [2024-06-07 23:29:43.836576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.836931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.836940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.381 qpair failed and we were unable to recover it. 00:33:21.381 [2024-06-07 23:29:43.837294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.837554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.837563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.381 qpair failed and we were unable to recover it. 00:33:21.381 [2024-06-07 23:29:43.837931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.838264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.838273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.381 qpair failed and we were unable to recover it. 
00:33:21.381 [2024-06-07 23:29:43.838628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.838846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.838855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.381 qpair failed and we were unable to recover it. 00:33:21.381 [2024-06-07 23:29:43.839225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.839563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.839573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.381 qpair failed and we were unable to recover it. 00:33:21.381 [2024-06-07 23:29:43.839893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.840256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.840266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.381 qpair failed and we were unable to recover it. 00:33:21.381 [2024-06-07 23:29:43.840599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.840822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.840832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.381 qpair failed and we were unable to recover it. 00:33:21.381 [2024-06-07 23:29:43.841182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.841429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.841438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.381 qpair failed and we were unable to recover it. 00:33:21.381 [2024-06-07 23:29:43.841773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.842151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.842160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.381 qpair failed and we were unable to recover it. 00:33:21.381 [2024-06-07 23:29:43.842486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.842814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.381 [2024-06-07 23:29:43.842824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.382 qpair failed and we were unable to recover it. 
00:33:21.382 [2024-06-07 23:29:43.843195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.843509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.843519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.382 qpair failed and we were unable to recover it. 00:33:21.382 [2024-06-07 23:29:43.843723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.844067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.844077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.382 qpair failed and we were unable to recover it. 00:33:21.382 [2024-06-07 23:29:43.844435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.844766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.844775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.382 qpair failed and we were unable to recover it. 00:33:21.382 [2024-06-07 23:29:43.845124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.845483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.845492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.382 qpair failed and we were unable to recover it. 00:33:21.382 [2024-06-07 23:29:43.845860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.846234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.846246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.382 qpair failed and we were unable to recover it. 00:33:21.382 [2024-06-07 23:29:43.846571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.846918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.846926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.382 qpair failed and we were unable to recover it. 00:33:21.382 [2024-06-07 23:29:43.847291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.847654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.847663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.382 qpair failed and we were unable to recover it. 
00:33:21.382 [2024-06-07 23:29:43.848008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.848361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.848370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.382 qpair failed and we were unable to recover it. 00:33:21.382 [2024-06-07 23:29:43.848694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.849044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.849052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.382 qpair failed and we were unable to recover it. 00:33:21.382 [2024-06-07 23:29:43.849419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.849771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.849780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.382 qpair failed and we were unable to recover it. 00:33:21.382 [2024-06-07 23:29:43.850124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.850532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.850541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.382 qpair failed and we were unable to recover it. 00:33:21.382 [2024-06-07 23:29:43.850890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.851098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.851108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.382 qpair failed and we were unable to recover it. 00:33:21.382 [2024-06-07 23:29:43.851458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.851793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.851801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.382 qpair failed and we were unable to recover it. 00:33:21.382 [2024-06-07 23:29:43.852158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.852454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.852463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.382 qpair failed and we were unable to recover it. 
00:33:21.382 [2024-06-07 23:29:43.852836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.853174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.853183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.382 qpair failed and we were unable to recover it. 00:33:21.382 [2024-06-07 23:29:43.853527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.853866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.853874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.382 qpair failed and we were unable to recover it. 00:33:21.382 [2024-06-07 23:29:43.854252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.854575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.854585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.382 qpair failed and we were unable to recover it. 00:33:21.382 [2024-06-07 23:29:43.854886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.855313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.855322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.382 qpair failed and we were unable to recover it. 00:33:21.382 [2024-06-07 23:29:43.855649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.855988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.855997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.382 qpair failed and we were unable to recover it. 00:33:21.382 [2024-06-07 23:29:43.856231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.856480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.856490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.382 qpair failed and we were unable to recover it. 00:33:21.382 [2024-06-07 23:29:43.856742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.857094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.857104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.382 qpair failed and we were unable to recover it. 
00:33:21.382 [2024-06-07 23:29:43.857306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.857654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.857663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.382 qpair failed and we were unable to recover it. 00:33:21.382 [2024-06-07 23:29:43.858026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.858369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.858378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.382 qpair failed and we were unable to recover it. 00:33:21.382 [2024-06-07 23:29:43.858733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.859079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.382 [2024-06-07 23:29:43.859088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.382 qpair failed and we were unable to recover it. 00:33:21.383 [2024-06-07 23:29:43.859445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.859699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.859708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.383 qpair failed and we were unable to recover it. 00:33:21.383 [2024-06-07 23:29:43.860054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.860306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.860319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.383 qpair failed and we were unable to recover it. 00:33:21.383 [2024-06-07 23:29:43.860675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.861047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.861055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.383 qpair failed and we were unable to recover it. 00:33:21.383 [2024-06-07 23:29:43.861424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.861778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.861787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.383 qpair failed and we were unable to recover it. 
00:33:21.383 [2024-06-07 23:29:43.862133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.862490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.862499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.383 qpair failed and we were unable to recover it. 00:33:21.383 [2024-06-07 23:29:43.862825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.863172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.863180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.383 qpair failed and we were unable to recover it. 00:33:21.383 [2024-06-07 23:29:43.863532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.863934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.863943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.383 qpair failed and we were unable to recover it. 00:33:21.383 [2024-06-07 23:29:43.864283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.864615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.864623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.383 qpair failed and we were unable to recover it. 00:33:21.383 [2024-06-07 23:29:43.864954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.865300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.865309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.383 qpair failed and we were unable to recover it. 00:33:21.383 [2024-06-07 23:29:43.865665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.866040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.866049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.383 qpair failed and we were unable to recover it. 00:33:21.383 [2024-06-07 23:29:43.866285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.866635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.866643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.383 qpair failed and we were unable to recover it. 
00:33:21.383 [2024-06-07 23:29:43.866968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.867327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.867337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.383 qpair failed and we were unable to recover it. 00:33:21.383 [2024-06-07 23:29:43.867714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.868048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.868056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.383 qpair failed and we were unable to recover it. 00:33:21.383 [2024-06-07 23:29:43.868438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.868806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.868814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.383 qpair failed and we were unable to recover it. 00:33:21.383 [2024-06-07 23:29:43.869039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.869401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.869410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.383 qpair failed and we were unable to recover it. 00:33:21.383 [2024-06-07 23:29:43.869759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.870118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.870127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.383 qpair failed and we were unable to recover it. 00:33:21.383 [2024-06-07 23:29:43.870514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.870866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.870875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.383 qpair failed and we were unable to recover it. 00:33:21.383 [2024-06-07 23:29:43.871224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.871572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.871582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.383 qpair failed and we were unable to recover it. 
00:33:21.383 [2024-06-07 23:29:43.871908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.872289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.872298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.383 qpair failed and we were unable to recover it. 00:33:21.383 [2024-06-07 23:29:43.872615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.872884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.872893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.383 qpair failed and we were unable to recover it. 00:33:21.383 [2024-06-07 23:29:43.873258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.873619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.873628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.383 qpair failed and we were unable to recover it. 00:33:21.383 [2024-06-07 23:29:43.873851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.874208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.874217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.383 qpair failed and we were unable to recover it. 00:33:21.383 [2024-06-07 23:29:43.874595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.874891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.874900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.383 qpair failed and we were unable to recover it. 00:33:21.383 [2024-06-07 23:29:43.875240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.875619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.875628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.383 qpair failed and we were unable to recover it. 00:33:21.383 [2024-06-07 23:29:43.875993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.876358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.876367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.383 qpair failed and we were unable to recover it. 
00:33:21.383 [2024-06-07 23:29:43.876717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.877052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.877062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.383 qpair failed and we were unable to recover it. 00:33:21.383 [2024-06-07 23:29:43.877393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.877733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.877742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.383 qpair failed and we were unable to recover it. 00:33:21.383 [2024-06-07 23:29:43.878025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.878398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.878407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.383 qpair failed and we were unable to recover it. 00:33:21.383 [2024-06-07 23:29:43.878725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.383 [2024-06-07 23:29:43.879100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.879108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.384 qpair failed and we were unable to recover it. 00:33:21.384 [2024-06-07 23:29:43.879441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.879818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.879827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.384 qpair failed and we were unable to recover it. 00:33:21.384 [2024-06-07 23:29:43.880174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.880516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.880525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.384 qpair failed and we were unable to recover it. 00:33:21.384 [2024-06-07 23:29:43.880772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.881102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.881111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.384 qpair failed and we were unable to recover it. 
00:33:21.384 [2024-06-07 23:29:43.881476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.881821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.881829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.384 qpair failed and we were unable to recover it. 00:33:21.384 [2024-06-07 23:29:43.882178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.882372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.882382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.384 qpair failed and we were unable to recover it. 00:33:21.384 [2024-06-07 23:29:43.882744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.883079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.883088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.384 qpair failed and we were unable to recover it. 00:33:21.384 [2024-06-07 23:29:43.883385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.883743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.883751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.384 qpair failed and we were unable to recover it. 00:33:21.384 [2024-06-07 23:29:43.884086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.884443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.884452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.384 qpair failed and we were unable to recover it. 00:33:21.384 [2024-06-07 23:29:43.884820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.885151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.885160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.384 qpair failed and we were unable to recover it. 00:33:21.384 [2024-06-07 23:29:43.885525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.885892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.885901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.384 qpair failed and we were unable to recover it. 
00:33:21.384 [2024-06-07 23:29:43.886249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.886588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.886597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.384 qpair failed and we were unable to recover it. 00:33:21.384 [2024-06-07 23:29:43.886977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.887320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.887329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.384 qpair failed and we were unable to recover it. 00:33:21.384 [2024-06-07 23:29:43.887701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.888035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.888044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.384 qpair failed and we were unable to recover it. 00:33:21.384 [2024-06-07 23:29:43.888393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.888737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.888746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.384 qpair failed and we were unable to recover it. 00:33:21.384 [2024-06-07 23:29:43.889091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.889282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.889292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.384 qpair failed and we were unable to recover it. 00:33:21.384 [2024-06-07 23:29:43.889619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.889992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.890001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.384 qpair failed and we were unable to recover it. 00:33:21.384 [2024-06-07 23:29:43.890329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.890708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.890716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.384 qpair failed and we were unable to recover it. 
00:33:21.384 [2024-06-07 23:29:43.891043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.891294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.891303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.384 qpair failed and we were unable to recover it. 00:33:21.384 [2024-06-07 23:29:43.891650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.892008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.892017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.384 qpair failed and we were unable to recover it. 00:33:21.384 [2024-06-07 23:29:43.892391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.892727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.892735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.384 qpair failed and we were unable to recover it. 00:33:21.384 [2024-06-07 23:29:43.893116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.893449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.893459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.384 qpair failed and we were unable to recover it. 00:33:21.384 [2024-06-07 23:29:43.893757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.894100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.894108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.384 qpair failed and we were unable to recover it. 00:33:21.384 [2024-06-07 23:29:43.894439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.894796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.894804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.384 qpair failed and we were unable to recover it. 00:33:21.384 [2024-06-07 23:29:43.895172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.895527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.895538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.384 qpair failed and we were unable to recover it. 
00:33:21.384 [2024-06-07 23:29:43.895909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.896246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.896256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.384 qpair failed and we were unable to recover it. 00:33:21.384 [2024-06-07 23:29:43.896607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.896935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.896944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.384 qpair failed and we were unable to recover it. 00:33:21.384 [2024-06-07 23:29:43.897216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.897592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.897601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.384 qpair failed and we were unable to recover it. 00:33:21.384 [2024-06-07 23:29:43.897928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.898273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.898282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.384 qpair failed and we were unable to recover it. 00:33:21.384 [2024-06-07 23:29:43.898731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.384 [2024-06-07 23:29:43.898962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.898971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.385 qpair failed and we were unable to recover it. 00:33:21.385 [2024-06-07 23:29:43.899327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.899693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.899702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.385 qpair failed and we were unable to recover it. 00:33:21.385 [2024-06-07 23:29:43.900030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.900401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.900410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.385 qpair failed and we were unable to recover it. 
00:33:21.385 [2024-06-07 23:29:43.900772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.901126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.901134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.385 qpair failed and we were unable to recover it. 00:33:21.385 [2024-06-07 23:29:43.901459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.901792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.901801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.385 qpair failed and we were unable to recover it. 00:33:21.385 [2024-06-07 23:29:43.902124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.902487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.902500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.385 qpair failed and we were unable to recover it. 00:33:21.385 [2024-06-07 23:29:43.902788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.903167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.903176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.385 qpair failed and we were unable to recover it. 00:33:21.385 [2024-06-07 23:29:43.903529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.903894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.903902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.385 qpair failed and we were unable to recover it. 00:33:21.385 [2024-06-07 23:29:43.904230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.904478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.904487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.385 qpair failed and we were unable to recover it. 00:33:21.385 [2024-06-07 23:29:43.904894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.905234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.905245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.385 qpair failed and we were unable to recover it. 
00:33:21.385 [2024-06-07 23:29:43.905593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.905915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.905924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.385 qpair failed and we were unable to recover it. 00:33:21.385 [2024-06-07 23:29:43.906279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.906497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.906505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.385 qpair failed and we were unable to recover it. 00:33:21.385 [2024-06-07 23:29:43.906856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.907187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.907196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.385 qpair failed and we were unable to recover it. 00:33:21.385 [2024-06-07 23:29:43.907555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.907894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.907903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.385 qpair failed and we were unable to recover it. 00:33:21.385 [2024-06-07 23:29:43.908151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.908460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.908470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.385 qpair failed and we were unable to recover it. 00:33:21.385 [2024-06-07 23:29:43.908789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.909007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.909016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.385 qpair failed and we were unable to recover it. 00:33:21.385 [2024-06-07 23:29:43.909352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.909720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.909728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.385 qpair failed and we were unable to recover it. 
00:33:21.385 [2024-06-07 23:29:43.910044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.910378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.910387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.385 qpair failed and we were unable to recover it. 00:33:21.385 [2024-06-07 23:29:43.910754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.911108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.911117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.385 qpair failed and we were unable to recover it. 00:33:21.385 [2024-06-07 23:29:43.911449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.911787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.911796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.385 qpair failed and we were unable to recover it. 00:33:21.385 [2024-06-07 23:29:43.912155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.912499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.912508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.385 qpair failed and we were unable to recover it. 00:33:21.385 [2024-06-07 23:29:43.912876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.913225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.913234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.385 qpair failed and we were unable to recover it. 00:33:21.385 [2024-06-07 23:29:43.913587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.913960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.913969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.385 qpair failed and we were unable to recover it. 00:33:21.385 [2024-06-07 23:29:43.914293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.914479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.914489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.385 qpair failed and we were unable to recover it. 
00:33:21.385 [2024-06-07 23:29:43.914809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.915151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.915159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.385 qpair failed and we were unable to recover it. 00:33:21.385 [2024-06-07 23:29:43.915491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.915867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.915876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.385 qpair failed and we were unable to recover it. 00:33:21.385 [2024-06-07 23:29:43.916145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.916493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.916502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.385 qpair failed and we were unable to recover it. 00:33:21.385 [2024-06-07 23:29:43.916744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.917076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.917084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.385 qpair failed and we were unable to recover it. 00:33:21.385 [2024-06-07 23:29:43.917420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.917787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.917796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.385 qpair failed and we were unable to recover it. 00:33:21.385 [2024-06-07 23:29:43.918141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.385 [2024-06-07 23:29:43.918505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.918514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.386 qpair failed and we were unable to recover it. 00:33:21.386 [2024-06-07 23:29:43.918879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.919282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.919292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.386 qpair failed and we were unable to recover it. 
00:33:21.386 [2024-06-07 23:29:43.919633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.920010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.920019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.386 qpair failed and we were unable to recover it. 00:33:21.386 [2024-06-07 23:29:43.920170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.920427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.920435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.386 qpair failed and we were unable to recover it. 00:33:21.386 [2024-06-07 23:29:43.920773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.921145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.921153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.386 qpair failed and we were unable to recover it. 00:33:21.386 [2024-06-07 23:29:43.921506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.921876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.921885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.386 qpair failed and we were unable to recover it. 00:33:21.386 [2024-06-07 23:29:43.922285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.922486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.922496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.386 qpair failed and we were unable to recover it. 00:33:21.386 [2024-06-07 23:29:43.922800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.923173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.923182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.386 qpair failed and we were unable to recover it. 00:33:21.386 [2024-06-07 23:29:43.923527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.923871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.923879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.386 qpair failed and we were unable to recover it. 
00:33:21.386 [2024-06-07 23:29:43.924221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.924592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.924602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.386 qpair failed and we were unable to recover it. 00:33:21.386 [2024-06-07 23:29:43.924949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.925328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.925337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.386 qpair failed and we were unable to recover it. 00:33:21.386 [2024-06-07 23:29:43.925707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.926037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.926047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.386 qpair failed and we were unable to recover it. 00:33:21.386 [2024-06-07 23:29:43.926282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.926503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.926512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.386 qpair failed and we were unable to recover it. 00:33:21.386 [2024-06-07 23:29:43.926860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.927199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.927208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.386 qpair failed and we were unable to recover it. 00:33:21.386 [2024-06-07 23:29:43.927572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.927948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.927956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.386 qpair failed and we were unable to recover it. 00:33:21.386 [2024-06-07 23:29:43.928318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.928694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.928702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.386 qpair failed and we were unable to recover it. 
00:33:21.386 [2024-06-07 23:29:43.929055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.929394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.929403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.386 qpair failed and we were unable to recover it. 00:33:21.386 [2024-06-07 23:29:43.929762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.930094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.930103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.386 qpair failed and we were unable to recover it. 00:33:21.386 [2024-06-07 23:29:43.930463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.930804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.930812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.386 qpair failed and we were unable to recover it. 00:33:21.386 [2024-06-07 23:29:43.931176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.931515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.931524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.386 qpair failed and we were unable to recover it. 00:33:21.386 [2024-06-07 23:29:43.931803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.932177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.932186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.386 qpair failed and we were unable to recover it. 00:33:21.386 [2024-06-07 23:29:43.932544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.932913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.932922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.386 qpair failed and we were unable to recover it. 00:33:21.386 [2024-06-07 23:29:43.933294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.933616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.933625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.386 qpair failed and we were unable to recover it. 
00:33:21.386 [2024-06-07 23:29:43.933926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.934290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.934299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.386 qpair failed and we were unable to recover it. 00:33:21.386 [2024-06-07 23:29:43.934628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.935013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.935022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.386 qpair failed and we were unable to recover it. 00:33:21.386 [2024-06-07 23:29:43.935422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.935807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.935815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.386 qpair failed and we were unable to recover it. 00:33:21.386 [2024-06-07 23:29:43.936141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.936477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.936486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.386 qpair failed and we were unable to recover it. 00:33:21.386 [2024-06-07 23:29:43.936833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.937209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.937220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.386 qpair failed and we were unable to recover it. 00:33:21.386 [2024-06-07 23:29:43.937624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.938035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.938044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.386 qpair failed and we were unable to recover it. 00:33:21.386 [2024-06-07 23:29:43.938373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.386 [2024-06-07 23:29:43.938617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.938626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.387 qpair failed and we were unable to recover it. 
00:33:21.387 [2024-06-07 23:29:43.938996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.939375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.939384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.387 qpair failed and we were unable to recover it. 00:33:21.387 [2024-06-07 23:29:43.939722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.940087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.940095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.387 qpair failed and we were unable to recover it. 00:33:21.387 [2024-06-07 23:29:43.940453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.940812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.940821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.387 qpair failed and we were unable to recover it. 00:33:21.387 [2024-06-07 23:29:43.941145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.941479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.941489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.387 qpair failed and we were unable to recover it. 00:33:21.387 [2024-06-07 23:29:43.941851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.942186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.942194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.387 qpair failed and we were unable to recover it. 00:33:21.387 [2024-06-07 23:29:43.942552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.942921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.942930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.387 qpair failed and we were unable to recover it. 00:33:21.387 [2024-06-07 23:29:43.943276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.943604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.943613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.387 qpair failed and we were unable to recover it. 
00:33:21.387 [2024-06-07 23:29:43.943954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.944313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.944323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.387 qpair failed and we were unable to recover it. 00:33:21.387 [2024-06-07 23:29:43.944527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.944809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.944817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.387 qpair failed and we were unable to recover it. 00:33:21.387 [2024-06-07 23:29:43.945159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.945503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.945512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.387 qpair failed and we were unable to recover it. 00:33:21.387 [2024-06-07 23:29:43.945858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.946236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.946252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.387 qpair failed and we were unable to recover it. 00:33:21.387 [2024-06-07 23:29:43.946549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.946913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.946922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.387 qpair failed and we were unable to recover it. 00:33:21.387 [2024-06-07 23:29:43.947255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.947607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.947615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.387 qpair failed and we were unable to recover it. 00:33:21.387 [2024-06-07 23:29:43.947940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.948299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.948308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.387 qpair failed and we were unable to recover it. 
00:33:21.387 [2024-06-07 23:29:43.948590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.948824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.948832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.387 qpair failed and we were unable to recover it. 00:33:21.387 [2024-06-07 23:29:43.949163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.949469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.949478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.387 qpair failed and we were unable to recover it. 00:33:21.387 [2024-06-07 23:29:43.949670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.950070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.950078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.387 qpair failed and we were unable to recover it. 00:33:21.387 [2024-06-07 23:29:43.950408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.950744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.950752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.387 qpair failed and we were unable to recover it. 00:33:21.387 [2024-06-07 23:29:43.951118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.951491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.951500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.387 qpair failed and we were unable to recover it. 00:33:21.387 [2024-06-07 23:29:43.951821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.952180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.952189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.387 qpair failed and we were unable to recover it. 00:33:21.387 [2024-06-07 23:29:43.952527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.952891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.952899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.387 qpair failed and we were unable to recover it. 
00:33:21.387 [2024-06-07 23:29:43.953225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.953600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.953609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.387 qpair failed and we were unable to recover it. 00:33:21.387 [2024-06-07 23:29:43.953977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.954363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.954372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.387 qpair failed and we were unable to recover it. 00:33:21.387 [2024-06-07 23:29:43.954733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.954990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.954999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.387 qpair failed and we were unable to recover it. 00:33:21.387 [2024-06-07 23:29:43.955321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.955662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.387 [2024-06-07 23:29:43.955670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-06-07 23:29:43.955996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.956353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.956362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-06-07 23:29:43.956722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.957071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.957079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-06-07 23:29:43.957413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.957733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.957743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 
00:33:21.388 [2024-06-07 23:29:43.957971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.958321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.958330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-06-07 23:29:43.958701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.959037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.959046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-06-07 23:29:43.959424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.959761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.959771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-06-07 23:29:43.960121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.960494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.960503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-06-07 23:29:43.960866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.961246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.961255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-06-07 23:29:43.961590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.961939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.961947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-06-07 23:29:43.962275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.962615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.962624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 
00:33:21.388 [2024-06-07 23:29:43.962988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.963321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.963330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-06-07 23:29:43.963705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.964066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.964075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-06-07 23:29:43.964423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.964759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.964767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-06-07 23:29:43.965105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.965451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.965460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-06-07 23:29:43.965805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.966139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.966148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-06-07 23:29:43.966495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.966861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.966871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-06-07 23:29:43.967210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.967620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.967630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 
00:33:21.388 [2024-06-07 23:29:43.967972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.968354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.968363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-06-07 23:29:43.968736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.969108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.969116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-06-07 23:29:43.969444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.969815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.969824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-06-07 23:29:43.970153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.970349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.970358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-06-07 23:29:43.970764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.971015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.971025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-06-07 23:29:43.971373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.971730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.971738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-06-07 23:29:43.972075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.972429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.972440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 
00:33:21.388 [2024-06-07 23:29:43.972790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.973157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.973166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-06-07 23:29:43.973504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.973866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.973875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-06-07 23:29:43.974224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.974648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.974658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-06-07 23:29:43.975025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.975406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.975420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.388 qpair failed and we were unable to recover it. 00:33:21.388 [2024-06-07 23:29:43.975770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.388 [2024-06-07 23:29:43.976129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.976138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-06-07 23:29:43.976463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.976844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.976853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-06-07 23:29:43.977207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.977507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.977516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 
00:33:21.389 [2024-06-07 23:29:43.977888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.978233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.978245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-06-07 23:29:43.978583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.978934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.978942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-06-07 23:29:43.979290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.979660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.979669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-06-07 23:29:43.979991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.980297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.980306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-06-07 23:29:43.980680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.981013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.981022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-06-07 23:29:43.981366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.981675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.981683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-06-07 23:29:43.982040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.982437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.982446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 
00:33:21.389 [2024-06-07 23:29:43.982772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.983135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.983143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-06-07 23:29:43.983466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.983841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.983850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-06-07 23:29:43.984171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.984536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.984545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-06-07 23:29:43.984891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.985247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.985257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-06-07 23:29:43.985617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.985926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.985935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-06-07 23:29:43.986292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.986634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.986643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-06-07 23:29:43.987027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.987376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.987387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 
00:33:21.389 [2024-06-07 23:29:43.987799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.988132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.988141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-06-07 23:29:43.988436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.988809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.988818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-06-07 23:29:43.989188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.989525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.989534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-06-07 23:29:43.989918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.990250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.990259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-06-07 23:29:43.990614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.990940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.990949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-06-07 23:29:43.991270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.991578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.991586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-06-07 23:29:43.991951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.992162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.992172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 
00:33:21.389 [2024-06-07 23:29:43.992418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.992761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.992770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-06-07 23:29:43.993101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.993320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.993329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-06-07 23:29:43.993692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.994038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.994046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-06-07 23:29:43.994254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.994531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.994540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-06-07 23:29:43.994878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.995208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.995217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.389 qpair failed and we were unable to recover it. 00:33:21.389 [2024-06-07 23:29:43.995652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.389 [2024-06-07 23:29:43.995984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:43.995993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-06-07 23:29:43.996347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:43.996726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:43.996735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 
00:33:21.390 [2024-06-07 23:29:43.997063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:43.997385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:43.997395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-06-07 23:29:43.997640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:43.998021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:43.998030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-06-07 23:29:43.998353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:43.998698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:43.998707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-06-07 23:29:43.999038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:43.999415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:43.999424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-06-07 23:29:43.999735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.000090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.000099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-06-07 23:29:44.000300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.000668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.000677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-06-07 23:29:44.001043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.001383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.001392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 
00:33:21.390 [2024-06-07 23:29:44.001748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.002142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.002150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-06-07 23:29:44.002488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.002835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.002844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-06-07 23:29:44.003166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.003585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.003594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-06-07 23:29:44.003944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.004276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.004285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-06-07 23:29:44.004463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.004878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.004887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-06-07 23:29:44.005214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.005579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.005588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-06-07 23:29:44.005933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.006115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.006125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 
00:33:21.390 [2024-06-07 23:29:44.006484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.006865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.006875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-06-07 23:29:44.007223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.007584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.007597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-06-07 23:29:44.007944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.008325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.008334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-06-07 23:29:44.008780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.009103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.009112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-06-07 23:29:44.009482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.009836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.009845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-06-07 23:29:44.010079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.010446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.010456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-06-07 23:29:44.010829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.011166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.011175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 
00:33:21.390 [2024-06-07 23:29:44.011454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.011821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.011830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-06-07 23:29:44.012154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.012527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.012536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-06-07 23:29:44.012881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.013236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.013248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-06-07 23:29:44.013575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.013933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.013941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-06-07 23:29:44.014210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.014564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.014573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-06-07 23:29:44.014900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.015279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.015289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.390 qpair failed and we were unable to recover it. 00:33:21.390 [2024-06-07 23:29:44.015633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.390 [2024-06-07 23:29:44.015989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-06-07 23:29:44.015998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 
00:33:21.391 [2024-06-07 23:29:44.016341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-06-07 23:29:44.016744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-06-07 23:29:44.016752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.391 [2024-06-07 23:29:44.017070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-06-07 23:29:44.017433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-06-07 23:29:44.017442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.391 [2024-06-07 23:29:44.017797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-06-07 23:29:44.018148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-06-07 23:29:44.018156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.391 [2024-06-07 23:29:44.018496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-06-07 23:29:44.018856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-06-07 23:29:44.018865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.391 [2024-06-07 23:29:44.019212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-06-07 23:29:44.019584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-06-07 23:29:44.019593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.391 [2024-06-07 23:29:44.019917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-06-07 23:29:44.020290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-06-07 23:29:44.020299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 00:33:21.391 [2024-06-07 23:29:44.020627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-06-07 23:29:44.020979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.391 [2024-06-07 23:29:44.020988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.391 qpair failed and we were unable to recover it. 
00:33:21.666 [2024-06-07 23:29:44.120073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.666 [2024-06-07 23:29:44.120451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.666 [2024-06-07 23:29:44.120459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.666 qpair failed and we were unable to recover it. 00:33:21.666 [2024-06-07 23:29:44.120805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.666 [2024-06-07 23:29:44.121182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.666 [2024-06-07 23:29:44.121190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.666 qpair failed and we were unable to recover it. 00:33:21.666 [2024-06-07 23:29:44.121533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.666 [2024-06-07 23:29:44.121853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.666 [2024-06-07 23:29:44.121862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.666 qpair failed and we were unable to recover it. 00:33:21.666 [2024-06-07 23:29:44.122124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.666 [2024-06-07 23:29:44.122340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.666 [2024-06-07 23:29:44.122349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.666 qpair failed and we were unable to recover it. 00:33:21.666 [2024-06-07 23:29:44.122533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.666 [2024-06-07 23:29:44.122854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.666 [2024-06-07 23:29:44.122864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.667 qpair failed and we were unable to recover it. 00:33:21.667 [2024-06-07 23:29:44.123074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.123267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.123277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.667 qpair failed and we were unable to recover it. 00:33:21.667 [2024-06-07 23:29:44.123562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.123928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.123937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.667 qpair failed and we were unable to recover it. 
00:33:21.667 [2024-06-07 23:29:44.124294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.124657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.124666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.667 qpair failed and we were unable to recover it. 00:33:21.667 [2024-06-07 23:29:44.125007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.125346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.125356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.667 qpair failed and we were unable to recover it. 00:33:21.667 [2024-06-07 23:29:44.125715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.126055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.126065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.667 qpair failed and we were unable to recover it. 00:33:21.667 [2024-06-07 23:29:44.126441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.126795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.126805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.667 qpair failed and we were unable to recover it. 00:33:21.667 [2024-06-07 23:29:44.127237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.127578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.127588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.667 qpair failed and we were unable to recover it. 00:33:21.667 [2024-06-07 23:29:44.127885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.128138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.128148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.667 qpair failed and we were unable to recover it. 00:33:21.667 [2024-06-07 23:29:44.128469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.128691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.128701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.667 qpair failed and we were unable to recover it. 
00:33:21.667 [2024-06-07 23:29:44.129055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.129368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.129378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.667 qpair failed and we were unable to recover it. 00:33:21.667 [2024-06-07 23:29:44.129728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.129902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.129912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.667 qpair failed and we were unable to recover it. 00:33:21.667 [2024-06-07 23:29:44.130273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.130624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.130634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.667 qpair failed and we were unable to recover it. 00:33:21.667 [2024-06-07 23:29:44.130844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.131181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.131190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.667 qpair failed and we were unable to recover it. 00:33:21.667 [2024-06-07 23:29:44.131407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.131784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.131793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.667 qpair failed and we were unable to recover it. 00:33:21.667 [2024-06-07 23:29:44.132181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.132522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.132532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.667 qpair failed and we were unable to recover it. 00:33:21.667 [2024-06-07 23:29:44.132884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.132954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.132963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.667 qpair failed and we were unable to recover it. 
00:33:21.667 [2024-06-07 23:29:44.133300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.133512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.133521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.667 qpair failed and we were unable to recover it. 00:33:21.667 [2024-06-07 23:29:44.133892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.134248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.134258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.667 qpair failed and we were unable to recover it. 00:33:21.667 [2024-06-07 23:29:44.134553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.134902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.134911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.667 qpair failed and we were unable to recover it. 00:33:21.667 [2024-06-07 23:29:44.135250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.135596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.135604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.667 qpair failed and we were unable to recover it. 00:33:21.667 [2024-06-07 23:29:44.136014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.136349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.136358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.667 qpair failed and we were unable to recover it. 00:33:21.667 [2024-06-07 23:29:44.136686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.137044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.137053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.667 qpair failed and we were unable to recover it. 00:33:21.667 [2024-06-07 23:29:44.137395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.137724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.137733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.667 qpair failed and we were unable to recover it. 
00:33:21.667 [2024-06-07 23:29:44.138057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.138389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.667 [2024-06-07 23:29:44.138399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.667 qpair failed and we were unable to recover it. 00:33:21.668 [2024-06-07 23:29:44.138753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.139084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.139093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.668 qpair failed and we were unable to recover it. 00:33:21.668 [2024-06-07 23:29:44.139485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.139840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.139849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.668 qpair failed and we were unable to recover it. 00:33:21.668 [2024-06-07 23:29:44.140203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.140488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.140498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.668 qpair failed and we were unable to recover it. 00:33:21.668 [2024-06-07 23:29:44.140846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.141180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.141189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.668 qpair failed and we were unable to recover it. 00:33:21.668 [2024-06-07 23:29:44.141507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.141798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.141808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.668 qpair failed and we were unable to recover it. 00:33:21.668 [2024-06-07 23:29:44.142137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.142493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.142502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.668 qpair failed and we were unable to recover it. 
00:33:21.668 [2024-06-07 23:29:44.142833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.143183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.143191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.668 qpair failed and we were unable to recover it. 00:33:21.668 [2024-06-07 23:29:44.143519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.143836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.143844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.668 qpair failed and we were unable to recover it. 00:33:21.668 [2024-06-07 23:29:44.144229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.144483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.144493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.668 qpair failed and we were unable to recover it. 00:33:21.668 [2024-06-07 23:29:44.144707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.145069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.145078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.668 qpair failed and we were unable to recover it. 00:33:21.668 [2024-06-07 23:29:44.145442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.145808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.145817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.668 qpair failed and we were unable to recover it. 00:33:21.668 [2024-06-07 23:29:44.146142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.146467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.146477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.668 qpair failed and we were unable to recover it. 00:33:21.668 [2024-06-07 23:29:44.146871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.147247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.147256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.668 qpair failed and we were unable to recover it. 
00:33:21.668 [2024-06-07 23:29:44.147581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.147899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.147908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.668 qpair failed and we were unable to recover it. 00:33:21.668 [2024-06-07 23:29:44.148238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.148598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.148607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.668 qpair failed and we were unable to recover it. 00:33:21.668 [2024-06-07 23:29:44.148930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.149295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.149304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.668 qpair failed and we were unable to recover it. 00:33:21.668 [2024-06-07 23:29:44.149650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.149980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.149989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.668 qpair failed and we were unable to recover it. 00:33:21.668 [2024-06-07 23:29:44.150362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.150597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.150606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.668 qpair failed and we were unable to recover it. 00:33:21.668 [2024-06-07 23:29:44.151015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.151348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.151359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.668 qpair failed and we were unable to recover it. 00:33:21.668 [2024-06-07 23:29:44.151711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.152084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.152093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.668 qpair failed and we were unable to recover it. 
00:33:21.668 [2024-06-07 23:29:44.152421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.152778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.152786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.668 qpair failed and we were unable to recover it. 00:33:21.668 [2024-06-07 23:29:44.153114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.153455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.153464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.668 qpair failed and we were unable to recover it. 00:33:21.668 [2024-06-07 23:29:44.153802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.154166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.154175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.668 qpair failed and we were unable to recover it. 00:33:21.668 [2024-06-07 23:29:44.154534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.154901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.154911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.668 qpair failed and we were unable to recover it. 00:33:21.668 [2024-06-07 23:29:44.155261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.155566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.155575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.668 qpair failed and we were unable to recover it. 00:33:21.668 [2024-06-07 23:29:44.155910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.156268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.156278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.668 qpair failed and we were unable to recover it. 00:33:21.668 [2024-06-07 23:29:44.156628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.157027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.157036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.668 qpair failed and we were unable to recover it. 
00:33:21.668 [2024-06-07 23:29:44.157401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.157774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.157783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.668 qpair failed and we were unable to recover it. 00:33:21.668 [2024-06-07 23:29:44.158162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.158470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.668 [2024-06-07 23:29:44.158481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.669 qpair failed and we were unable to recover it. 00:33:21.669 [2024-06-07 23:29:44.158812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.159096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.159104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.669 qpair failed and we were unable to recover it. 00:33:21.669 [2024-06-07 23:29:44.159452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.159818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.159826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.669 qpair failed and we were unable to recover it. 00:33:21.669 [2024-06-07 23:29:44.160151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.160553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.160562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.669 qpair failed and we were unable to recover it. 00:33:21.669 [2024-06-07 23:29:44.160753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.161131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.161139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.669 qpair failed and we were unable to recover it. 00:33:21.669 [2024-06-07 23:29:44.161533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.161872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.161880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.669 qpair failed and we were unable to recover it. 
00:33:21.669 [2024-06-07 23:29:44.162127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.162484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.162493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.669 qpair failed and we were unable to recover it. 00:33:21.669 [2024-06-07 23:29:44.162822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.163176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.163185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.669 qpair failed and we were unable to recover it. 00:33:21.669 [2024-06-07 23:29:44.163493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.163742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.163751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.669 qpair failed and we were unable to recover it. 00:33:21.669 [2024-06-07 23:29:44.164112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.164322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.164332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.669 qpair failed and we were unable to recover it. 00:33:21.669 [2024-06-07 23:29:44.164689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.165020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.165028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.669 qpair failed and we were unable to recover it. 00:33:21.669 [2024-06-07 23:29:44.165329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.165708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.165717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.669 qpair failed and we were unable to recover it. 00:33:21.669 [2024-06-07 23:29:44.166085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.166440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.166449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.669 qpair failed and we were unable to recover it. 
00:33:21.669 [2024-06-07 23:29:44.166792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.167109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.167118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.669 qpair failed and we were unable to recover it. 00:33:21.669 [2024-06-07 23:29:44.167454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.167834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.167843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.669 qpair failed and we were unable to recover it. 00:33:21.669 [2024-06-07 23:29:44.168157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.168457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.168466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.669 qpair failed and we were unable to recover it. 00:33:21.669 [2024-06-07 23:29:44.168790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.169104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.169112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.669 qpair failed and we were unable to recover it. 00:33:21.669 [2024-06-07 23:29:44.169441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.169792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.169801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.669 qpair failed and we were unable to recover it. 00:33:21.669 [2024-06-07 23:29:44.170031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.170401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.170410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.669 qpair failed and we were unable to recover it. 00:33:21.669 [2024-06-07 23:29:44.170759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.171132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.171140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.669 qpair failed and we were unable to recover it. 
00:33:21.669 [2024-06-07 23:29:44.171471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.171842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.171851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.669 qpair failed and we were unable to recover it. 00:33:21.669 [2024-06-07 23:29:44.172184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.172546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.172555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.669 qpair failed and we were unable to recover it. 00:33:21.669 [2024-06-07 23:29:44.172937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.173276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.173286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.669 qpair failed and we were unable to recover it. 00:33:21.669 [2024-06-07 23:29:44.173609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.173873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.173881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.669 qpair failed and we were unable to recover it. 00:33:21.669 [2024-06-07 23:29:44.174228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.174562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.174571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.669 qpair failed and we were unable to recover it. 00:33:21.669 [2024-06-07 23:29:44.174940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.175293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.175302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.669 qpair failed and we were unable to recover it. 00:33:21.669 [2024-06-07 23:29:44.175660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.176038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.176047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.669 qpair failed and we were unable to recover it. 
00:33:21.669 [2024-06-07 23:29:44.176410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.176788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.176796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.669 qpair failed and we were unable to recover it. 00:33:21.669 [2024-06-07 23:29:44.177101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.177443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.177453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.669 qpair failed and we were unable to recover it. 00:33:21.669 [2024-06-07 23:29:44.177780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.178166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.669 [2024-06-07 23:29:44.178175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.670 qpair failed and we were unable to recover it. 00:33:21.670 [2024-06-07 23:29:44.178545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.670 [2024-06-07 23:29:44.178906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.670 [2024-06-07 23:29:44.178916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.670 qpair failed and we were unable to recover it. 00:33:21.670 [2024-06-07 23:29:44.179261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.670 [2024-06-07 23:29:44.179597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.670 [2024-06-07 23:29:44.179606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.670 qpair failed and we were unable to recover it. 00:33:21.670 [2024-06-07 23:29:44.179973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.670 [2024-06-07 23:29:44.180307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.670 [2024-06-07 23:29:44.180316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.670 qpair failed and we were unable to recover it. 00:33:21.670 [2024-06-07 23:29:44.180705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.670 [2024-06-07 23:29:44.181053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.670 [2024-06-07 23:29:44.181062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.670 qpair failed and we were unable to recover it. 
00:33:21.670 [2024-06-07 23:29:44.181221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.670 [2024-06-07 23:29:44.181565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.670 [2024-06-07 23:29:44.181574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.670 qpair failed and we were unable to recover it. 00:33:21.670 [2024-06-07 23:29:44.181929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.670 [2024-06-07 23:29:44.182275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.670 [2024-06-07 23:29:44.182284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.670 qpair failed and we were unable to recover it. 00:33:21.670 [2024-06-07 23:29:44.182629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.670 [2024-06-07 23:29:44.182996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.670 [2024-06-07 23:29:44.183005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.670 qpair failed and we were unable to recover it. 00:33:21.670 [2024-06-07 23:29:44.183371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.670 [2024-06-07 23:29:44.183750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.670 [2024-06-07 23:29:44.183758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.670 qpair failed and we were unable to recover it. 00:33:21.670 [2024-06-07 23:29:44.184104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.670 [2024-06-07 23:29:44.184456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.670 [2024-06-07 23:29:44.184465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.670 qpair failed and we were unable to recover it. 00:33:21.670 [2024-06-07 23:29:44.184787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.670 [2024-06-07 23:29:44.185152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.670 [2024-06-07 23:29:44.185161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.670 qpair failed and we were unable to recover it. 00:33:21.670 [2024-06-07 23:29:44.185577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.670 [2024-06-07 23:29:44.185944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.670 [2024-06-07 23:29:44.185953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.670 qpair failed and we were unable to recover it. 
00:33:21.670 [2024-06-07 23:29:44.186327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.670 [2024-06-07 23:29:44.186666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.670 [2024-06-07 23:29:44.186675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420
00:33:21.670 qpair failed and we were unable to recover it.
[the same failure pattern (two posix_sock_create connect() errors with errno = 111, the nvme_tcp_qpair_connect_sock error for tqpair=0x202fdb0 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats for every retry from 2024-06-07 23:29:44.187032 through 23:29:44.292810]
00:33:21.675 [2024-06-07 23:29:44.293238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.675 [2024-06-07 23:29:44.293575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.675 [2024-06-07 23:29:44.293585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420
00:33:21.675 qpair failed and we were unable to recover it.
00:33:21.675 [2024-06-07 23:29:44.293937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.675 [2024-06-07 23:29:44.294163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.675 [2024-06-07 23:29:44.294173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.675 qpair failed and we were unable to recover it. 00:33:21.675 [2024-06-07 23:29:44.294528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.675 [2024-06-07 23:29:44.294780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.675 [2024-06-07 23:29:44.294790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.675 qpair failed and we were unable to recover it. 00:33:21.675 [2024-06-07 23:29:44.295155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.675 [2024-06-07 23:29:44.295506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.675 [2024-06-07 23:29:44.295518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.675 qpair failed and we were unable to recover it. 00:33:21.675 [2024-06-07 23:29:44.295869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.675 [2024-06-07 23:29:44.296235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.675 [2024-06-07 23:29:44.296248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.675 qpair failed and we were unable to recover it. 00:33:21.675 [2024-06-07 23:29:44.296577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.675 [2024-06-07 23:29:44.296849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.675 [2024-06-07 23:29:44.296859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.675 qpair failed and we were unable to recover it. 00:33:21.675 [2024-06-07 23:29:44.297107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.297416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.297426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.676 qpair failed and we were unable to recover it. 00:33:21.676 [2024-06-07 23:29:44.297805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.298106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.298115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.676 qpair failed and we were unable to recover it. 
00:33:21.676 [2024-06-07 23:29:44.298331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.298641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.298651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.676 qpair failed and we were unable to recover it. 00:33:21.676 [2024-06-07 23:29:44.299001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.299355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.299365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.676 qpair failed and we were unable to recover it. 00:33:21.676 [2024-06-07 23:29:44.299738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.300115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.300127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.676 qpair failed and we were unable to recover it. 00:33:21.676 [2024-06-07 23:29:44.300535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.300915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.300924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.676 qpair failed and we were unable to recover it. 00:33:21.676 [2024-06-07 23:29:44.301249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.301538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.301548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.676 qpair failed and we were unable to recover it. 00:33:21.676 [2024-06-07 23:29:44.301894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.302231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.302240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.676 qpair failed and we were unable to recover it. 00:33:21.676 [2024-06-07 23:29:44.302600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.302966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.302975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.676 qpair failed and we were unable to recover it. 
00:33:21.676 [2024-06-07 23:29:44.303206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.303587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.303596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.676 qpair failed and we were unable to recover it. 00:33:21.676 [2024-06-07 23:29:44.303929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.304284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.304293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.676 qpair failed and we were unable to recover it. 00:33:21.676 [2024-06-07 23:29:44.304645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.304889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.304898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.676 qpair failed and we were unable to recover it. 00:33:21.676 [2024-06-07 23:29:44.305253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.305656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.305664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.676 qpair failed and we were unable to recover it. 00:33:21.676 [2024-06-07 23:29:44.306001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.306348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.306357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.676 qpair failed and we were unable to recover it. 00:33:21.676 [2024-06-07 23:29:44.306709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.307050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.307062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.676 qpair failed and we were unable to recover it. 00:33:21.676 [2024-06-07 23:29:44.307233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.307562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.307572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.676 qpair failed and we were unable to recover it. 
00:33:21.676 [2024-06-07 23:29:44.307940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.308272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.308281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.676 qpair failed and we were unable to recover it. 00:33:21.676 [2024-06-07 23:29:44.308637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.308996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.309005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.676 qpair failed and we were unable to recover it. 00:33:21.676 [2024-06-07 23:29:44.309252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.309596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.309605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.676 qpair failed and we were unable to recover it. 00:33:21.676 [2024-06-07 23:29:44.309945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.310157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.310167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.676 qpair failed and we were unable to recover it. 00:33:21.676 [2024-06-07 23:29:44.310526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.310872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.310880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.676 qpair failed and we were unable to recover it. 00:33:21.676 [2024-06-07 23:29:44.311224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.311599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.311608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.676 qpair failed and we were unable to recover it. 00:33:21.676 [2024-06-07 23:29:44.311932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.312304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.312313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.676 qpair failed and we were unable to recover it. 
00:33:21.676 [2024-06-07 23:29:44.312663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.313039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.313048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.676 qpair failed and we were unable to recover it. 00:33:21.676 [2024-06-07 23:29:44.313391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.313775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.676 [2024-06-07 23:29:44.313784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.677 qpair failed and we were unable to recover it. 00:33:21.677 [2024-06-07 23:29:44.313877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.314201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.314210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.677 qpair failed and we were unable to recover it. 00:33:21.677 [2024-06-07 23:29:44.314604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.314942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.314951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.677 qpair failed and we were unable to recover it. 00:33:21.677 [2024-06-07 23:29:44.315402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.315756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.315765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.677 qpair failed and we were unable to recover it. 00:33:21.677 [2024-06-07 23:29:44.316132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.316402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.316411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.677 qpair failed and we were unable to recover it. 00:33:21.677 [2024-06-07 23:29:44.316608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.316834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.316843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.677 qpair failed and we were unable to recover it. 
00:33:21.677 [2024-06-07 23:29:44.317234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.317629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.317638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.677 qpair failed and we were unable to recover it. 00:33:21.677 [2024-06-07 23:29:44.317969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.318342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.318352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.677 qpair failed and we were unable to recover it. 00:33:21.677 [2024-06-07 23:29:44.318694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.319037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.319046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.677 qpair failed and we were unable to recover it. 00:33:21.677 [2024-06-07 23:29:44.319397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.319748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.319757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.677 qpair failed and we were unable to recover it. 00:33:21.677 [2024-06-07 23:29:44.320100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.320442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.320452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.677 qpair failed and we were unable to recover it. 00:33:21.677 [2024-06-07 23:29:44.320814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.321122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.321131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.677 qpair failed and we were unable to recover it. 00:33:21.677 [2024-06-07 23:29:44.321409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.321755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.321764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.677 qpair failed and we were unable to recover it. 
00:33:21.677 [2024-06-07 23:29:44.322098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.322221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.322229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.677 qpair failed and we were unable to recover it. 00:33:21.677 [2024-06-07 23:29:44.322568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.322904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.322913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.677 qpair failed and we were unable to recover it. 00:33:21.677 [2024-06-07 23:29:44.323263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.323608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.323617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.677 qpair failed and we were unable to recover it. 00:33:21.677 [2024-06-07 23:29:44.323973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.324351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.324360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.677 qpair failed and we were unable to recover it. 00:33:21.677 [2024-06-07 23:29:44.324718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.325062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.325071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.677 qpair failed and we were unable to recover it. 00:33:21.677 [2024-06-07 23:29:44.325408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.325780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.325789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.677 qpair failed and we were unable to recover it. 00:33:21.677 [2024-06-07 23:29:44.326141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.326354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.326363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.677 qpair failed and we were unable to recover it. 
00:33:21.677 [2024-06-07 23:29:44.326714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.327050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.327059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.677 qpair failed and we were unable to recover it. 00:33:21.677 [2024-06-07 23:29:44.327389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.327751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.327760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.677 qpair failed and we were unable to recover it. 00:33:21.677 [2024-06-07 23:29:44.328118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.328355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.328364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.677 qpair failed and we were unable to recover it. 00:33:21.677 [2024-06-07 23:29:44.328707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.329082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.329092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.677 qpair failed and we were unable to recover it. 00:33:21.677 [2024-06-07 23:29:44.329448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.329659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.329668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.677 qpair failed and we were unable to recover it. 00:33:21.677 [2024-06-07 23:29:44.330012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.330262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.330271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.677 qpair failed and we were unable to recover it. 00:33:21.677 [2024-06-07 23:29:44.330616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.330817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.330827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.677 qpair failed and we were unable to recover it. 
00:33:21.677 [2024-06-07 23:29:44.331044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.331451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.331462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.677 qpair failed and we were unable to recover it. 00:33:21.677 [2024-06-07 23:29:44.331813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.332909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.332929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.677 qpair failed and we were unable to recover it. 00:33:21.677 [2024-06-07 23:29:44.333290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.677 [2024-06-07 23:29:44.333645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.678 [2024-06-07 23:29:44.333654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.678 qpair failed and we were unable to recover it. 00:33:21.678 [2024-06-07 23:29:44.334029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.678 [2024-06-07 23:29:44.334255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.678 [2024-06-07 23:29:44.334266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.678 qpair failed and we were unable to recover it. 00:33:21.678 [2024-06-07 23:29:44.334628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.334892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.334903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.947 qpair failed and we were unable to recover it. 00:33:21.947 [2024-06-07 23:29:44.335256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.335624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.335634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.947 qpair failed and we were unable to recover it. 00:33:21.947 [2024-06-07 23:29:44.336056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.337216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.337239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.947 qpair failed and we were unable to recover it. 
00:33:21.947 [2024-06-07 23:29:44.337765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.338102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.338111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.947 qpair failed and we were unable to recover it. 00:33:21.947 [2024-06-07 23:29:44.338333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.338571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.338580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.947 qpair failed and we were unable to recover it. 00:33:21.947 [2024-06-07 23:29:44.338926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.339298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.339307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.947 qpair failed and we were unable to recover it. 00:33:21.947 [2024-06-07 23:29:44.339721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.340102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.340111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.947 qpair failed and we were unable to recover it. 00:33:21.947 [2024-06-07 23:29:44.340460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.340805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.340814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.947 qpair failed and we were unable to recover it. 00:33:21.947 [2024-06-07 23:29:44.341184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.341472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.341481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.947 qpair failed and we were unable to recover it. 00:33:21.947 [2024-06-07 23:29:44.341831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.342176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.342185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.947 qpair failed and we were unable to recover it. 
00:33:21.947 [2024-06-07 23:29:44.342573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.342903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.342915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.947 qpair failed and we were unable to recover it. 00:33:21.947 [2024-06-07 23:29:44.343266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.343609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.343618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.947 qpair failed and we were unable to recover it. 00:33:21.947 [2024-06-07 23:29:44.343959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.344162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.344173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.947 qpair failed and we were unable to recover it. 00:33:21.947 [2024-06-07 23:29:44.344515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.344941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.344950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.947 qpair failed and we were unable to recover it. 00:33:21.947 [2024-06-07 23:29:44.345287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.345611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.345620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.947 qpair failed and we were unable to recover it. 00:33:21.947 [2024-06-07 23:29:44.345861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.346208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.346217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.947 qpair failed and we were unable to recover it. 00:33:21.947 [2024-06-07 23:29:44.346615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.346969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.346979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.947 qpair failed and we were unable to recover it. 
00:33:21.947 [2024-06-07 23:29:44.347325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.347703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.347712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.947 qpair failed and we were unable to recover it. 00:33:21.947 [2024-06-07 23:29:44.348079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.348406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.348416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.947 qpair failed and we were unable to recover it. 00:33:21.947 [2024-06-07 23:29:44.348838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.349180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.947 [2024-06-07 23:29:44.349189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.947 qpair failed and we were unable to recover it. 00:33:21.947 [2024-06-07 23:29:44.349582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.349924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.349933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.948 qpair failed and we were unable to recover it. 00:33:21.948 [2024-06-07 23:29:44.350284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.350641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.350650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.948 qpair failed and we were unable to recover it. 00:33:21.948 [2024-06-07 23:29:44.350985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.351335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.351352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.948 qpair failed and we were unable to recover it. 00:33:21.948 [2024-06-07 23:29:44.351700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.352039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.352048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.948 qpair failed and we were unable to recover it. 
00:33:21.948 [2024-06-07 23:29:44.352411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.352791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.352800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.948 qpair failed and we were unable to recover it. 00:33:21.948 [2024-06-07 23:29:44.353089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.353455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.353464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.948 qpair failed and we were unable to recover it. 00:33:21.948 [2024-06-07 23:29:44.353831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.354201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.354211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.948 qpair failed and we were unable to recover it. 00:33:21.948 [2024-06-07 23:29:44.354559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.354898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.354907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.948 qpair failed and we were unable to recover it. 00:33:21.948 [2024-06-07 23:29:44.355258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.355558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.355567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.948 qpair failed and we were unable to recover it. 00:33:21.948 [2024-06-07 23:29:44.355787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.356127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.356136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.948 qpair failed and we were unable to recover it. 00:33:21.948 [2024-06-07 23:29:44.356572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.356897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.356906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.948 qpair failed and we were unable to recover it. 
00:33:21.948 [2024-06-07 23:29:44.357258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.357589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.357598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.948 qpair failed and we were unable to recover it. 00:33:21.948 [2024-06-07 23:29:44.357924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.358323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.358336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.948 qpair failed and we were unable to recover it. 00:33:21.948 [2024-06-07 23:29:44.358654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.359032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.359041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.948 qpair failed and we were unable to recover it. 00:33:21.948 [2024-06-07 23:29:44.359368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.359706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.359715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.948 qpair failed and we were unable to recover it. 00:33:21.948 [2024-06-07 23:29:44.360038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.360414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.360424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.948 qpair failed and we were unable to recover it. 00:33:21.948 [2024-06-07 23:29:44.360633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.361005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.361014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.948 qpair failed and we were unable to recover it. 00:33:21.948 [2024-06-07 23:29:44.361218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.361473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.361482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.948 qpair failed and we were unable to recover it. 
00:33:21.948 [2024-06-07 23:29:44.361861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.362252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.362263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.948 qpair failed and we were unable to recover it. 00:33:21.948 [2024-06-07 23:29:44.362610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.362980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.948 [2024-06-07 23:29:44.362989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.948 qpair failed and we were unable to recover it. 00:33:21.948 [2024-06-07 23:29:44.363303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.363685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.363694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.949 qpair failed and we were unable to recover it. 00:33:21.949 [2024-06-07 23:29:44.364038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.364376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.364386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.949 qpair failed and we were unable to recover it. 00:33:21.949 [2024-06-07 23:29:44.364759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.365115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.365123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.949 qpair failed and we were unable to recover it. 00:33:21.949 [2024-06-07 23:29:44.365441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.365747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.365759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.949 qpair failed and we were unable to recover it. 00:33:21.949 [2024-06-07 23:29:44.366105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.366416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.366425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.949 qpair failed and we were unable to recover it. 
00:33:21.949 [2024-06-07 23:29:44.366647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.366934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.366943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.949 qpair failed and we were unable to recover it. 00:33:21.949 [2024-06-07 23:29:44.367329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.367593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.367602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.949 qpair failed and we were unable to recover it. 00:33:21.949 [2024-06-07 23:29:44.367931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.368297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.368307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.949 qpair failed and we were unable to recover it. 00:33:21.949 [2024-06-07 23:29:44.368681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.369017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.369026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.949 qpair failed and we were unable to recover it. 00:33:21.949 [2024-06-07 23:29:44.369346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.369691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.369700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.949 qpair failed and we were unable to recover it. 00:33:21.949 [2024-06-07 23:29:44.369948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.370239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.370261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.949 qpair failed and we were unable to recover it. 00:33:21.949 [2024-06-07 23:29:44.370549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.370882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.370891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.949 qpair failed and we were unable to recover it. 
00:33:21.949 [2024-06-07 23:29:44.371222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.371589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.371599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.949 qpair failed and we were unable to recover it. 00:33:21.949 [2024-06-07 23:29:44.371951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.372287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.372296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.949 qpair failed and we were unable to recover it. 00:33:21.949 [2024-06-07 23:29:44.372550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.372919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.372929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.949 qpair failed and we were unable to recover it. 00:33:21.949 [2024-06-07 23:29:44.373220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.373543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.373552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.949 qpair failed and we were unable to recover it. 00:33:21.949 [2024-06-07 23:29:44.373880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.374042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.374053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.949 qpair failed and we were unable to recover it. 00:33:21.949 [2024-06-07 23:29:44.374372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.374720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.374729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.949 qpair failed and we were unable to recover it. 00:33:21.949 [2024-06-07 23:29:44.374951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.375304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.375314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.949 qpair failed and we were unable to recover it. 
00:33:21.949 [2024-06-07 23:29:44.375683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.376005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.376014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.949 qpair failed and we were unable to recover it. 00:33:21.949 [2024-06-07 23:29:44.376394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.376763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.376774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.949 qpair failed and we were unable to recover it. 00:33:21.949 [2024-06-07 23:29:44.377114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.377455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.949 [2024-06-07 23:29:44.377467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.949 qpair failed and we were unable to recover it. 00:33:21.950 [2024-06-07 23:29:44.377828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.378160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.378169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.950 qpair failed and we were unable to recover it. 00:33:21.950 [2024-06-07 23:29:44.378497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.378825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.378834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.950 qpair failed and we were unable to recover it. 00:33:21.950 [2024-06-07 23:29:44.379164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.379481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.379491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.950 qpair failed and we were unable to recover it. 00:33:21.950 [2024-06-07 23:29:44.379831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.380192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.380201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.950 qpair failed and we were unable to recover it. 
00:33:21.950 [2024-06-07 23:29:44.380532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.380857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.380867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.950 qpair failed and we were unable to recover it. 00:33:21.950 [2024-06-07 23:29:44.381195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.381512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.381522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.950 qpair failed and we were unable to recover it. 00:33:21.950 [2024-06-07 23:29:44.381867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.382205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.382214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.950 qpair failed and we were unable to recover it. 00:33:21.950 [2024-06-07 23:29:44.382563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.382900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.382909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.950 qpair failed and we were unable to recover it. 00:33:21.950 [2024-06-07 23:29:44.383287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.383608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.383617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.950 qpair failed and we were unable to recover it. 00:33:21.950 [2024-06-07 23:29:44.383968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.384314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.384323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.950 qpair failed and we were unable to recover it. 00:33:21.950 [2024-06-07 23:29:44.384669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.384992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.385008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.950 qpair failed and we were unable to recover it. 
00:33:21.950 [2024-06-07 23:29:44.385357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.385584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.385593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.950 qpair failed and we were unable to recover it. 00:33:21.950 [2024-06-07 23:29:44.385963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.386299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.386309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.950 qpair failed and we were unable to recover it. 00:33:21.950 [2024-06-07 23:29:44.386653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.386985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.386994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.950 qpair failed and we were unable to recover it. 00:33:21.950 [2024-06-07 23:29:44.387325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.387559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.387568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.950 qpair failed and we were unable to recover it. 00:33:21.950 [2024-06-07 23:29:44.387906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.388270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.388279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.950 qpair failed and we were unable to recover it. 00:33:21.950 [2024-06-07 23:29:44.388657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.388993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.389002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.950 qpair failed and we were unable to recover it. 00:33:21.950 [2024-06-07 23:29:44.389352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.389610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.389619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.950 qpair failed and we were unable to recover it. 
00:33:21.950 [2024-06-07 23:29:44.389952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.390282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.390292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.950 qpair failed and we were unable to recover it. 00:33:21.950 [2024-06-07 23:29:44.390626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.390909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.390918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.950 qpair failed and we were unable to recover it. 00:33:21.950 [2024-06-07 23:29:44.391283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.950 [2024-06-07 23:29:44.391623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.391632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.951 qpair failed and we were unable to recover it. 00:33:21.951 [2024-06-07 23:29:44.391984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.392234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.392247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.951 qpair failed and we were unable to recover it. 00:33:21.951 [2024-06-07 23:29:44.392603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.392935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.392944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.951 qpair failed and we were unable to recover it. 00:33:21.951 [2024-06-07 23:29:44.393293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.393601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.393609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.951 qpair failed and we were unable to recover it. 00:33:21.951 [2024-06-07 23:29:44.393950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.394298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.394308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.951 qpair failed and we were unable to recover it. 
00:33:21.951 [2024-06-07 23:29:44.394649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.394986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.394995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.951 qpair failed and we were unable to recover it. 00:33:21.951 [2024-06-07 23:29:44.395214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.395598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.395607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.951 qpair failed and we were unable to recover it. 00:33:21.951 [2024-06-07 23:29:44.395938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.396258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.396267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.951 qpair failed and we were unable to recover it. 00:33:21.951 [2024-06-07 23:29:44.396657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.397001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.397010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.951 qpair failed and we were unable to recover it. 00:33:21.951 [2024-06-07 23:29:44.397349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.397715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.397724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.951 qpair failed and we were unable to recover it. 00:33:21.951 [2024-06-07 23:29:44.398056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.398415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.398426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.951 qpair failed and we were unable to recover it. 00:33:21.951 [2024-06-07 23:29:44.398810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.399151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.399159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.951 qpair failed and we were unable to recover it. 
00:33:21.951 [2024-06-07 23:29:44.399509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.399820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.399829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.951 qpair failed and we were unable to recover it. 00:33:21.951 [2024-06-07 23:29:44.400164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.400477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.400486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.951 qpair failed and we were unable to recover it. 00:33:21.951 [2024-06-07 23:29:44.400821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.401153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.401163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.951 qpair failed and we were unable to recover it. 00:33:21.951 [2024-06-07 23:29:44.401458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.401797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.401807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.951 qpair failed and we were unable to recover it. 00:33:21.951 [2024-06-07 23:29:44.402143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.402478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.402487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.951 qpair failed and we were unable to recover it. 00:33:21.951 [2024-06-07 23:29:44.402816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.403183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.403192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.951 qpair failed and we were unable to recover it. 00:33:21.951 [2024-06-07 23:29:44.403554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.403866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.403875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.951 qpair failed and we were unable to recover it. 
00:33:21.951 [2024-06-07 23:29:44.404205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.404453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.404462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.951 qpair failed and we were unable to recover it. 00:33:21.951 [2024-06-07 23:29:44.404854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.405193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.951 [2024-06-07 23:29:44.405202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.951 qpair failed and we were unable to recover it. 00:33:21.951 [2024-06-07 23:29:44.405520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.405851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.405860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.952 qpair failed and we were unable to recover it. 00:33:21.952 [2024-06-07 23:29:44.405999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.406371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.406380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.952 qpair failed and we were unable to recover it. 00:33:21.952 [2024-06-07 23:29:44.406676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.406965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.406975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.952 qpair failed and we were unable to recover it. 00:33:21.952 [2024-06-07 23:29:44.407315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.407660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.407670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.952 qpair failed and we were unable to recover it. 00:33:21.952 [2024-06-07 23:29:44.408037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.408348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.408358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.952 qpair failed and we were unable to recover it. 
00:33:21.952 [2024-06-07 23:29:44.408780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.409118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.409127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.952 qpair failed and we were unable to recover it. 00:33:21.952 [2024-06-07 23:29:44.409367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.409720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.409729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.952 qpair failed and we were unable to recover it. 00:33:21.952 [2024-06-07 23:29:44.410111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.410442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.410452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.952 qpair failed and we were unable to recover it. 00:33:21.952 [2024-06-07 23:29:44.410792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.411128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.411137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.952 qpair failed and we were unable to recover it. 00:33:21.952 [2024-06-07 23:29:44.411462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.411800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.411812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.952 qpair failed and we were unable to recover it. 00:33:21.952 [2024-06-07 23:29:44.412138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.412444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.412453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.952 qpair failed and we were unable to recover it. 00:33:21.952 [2024-06-07 23:29:44.412716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.413051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.413060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.952 qpair failed and we were unable to recover it. 
00:33:21.952 [2024-06-07 23:29:44.413399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.413748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.413757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.952 qpair failed and we were unable to recover it. 00:33:21.952 [2024-06-07 23:29:44.414001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.414339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.414349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.952 qpair failed and we were unable to recover it. 00:33:21.952 [2024-06-07 23:29:44.414685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.415004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.415013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.952 qpair failed and we were unable to recover it. 00:33:21.952 [2024-06-07 23:29:44.415388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.415736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.415744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.952 qpair failed and we were unable to recover it. 00:33:21.952 [2024-06-07 23:29:44.416089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.416426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.416435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.952 qpair failed and we were unable to recover it. 00:33:21.952 [2024-06-07 23:29:44.416766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.417077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.417086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.952 qpair failed and we were unable to recover it. 00:33:21.952 [2024-06-07 23:29:44.417426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.417722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.417731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.952 qpair failed and we were unable to recover it. 
00:33:21.952 [2024-06-07 23:29:44.418057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.418390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.418401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.952 qpair failed and we were unable to recover it. 00:33:21.952 [2024-06-07 23:29:44.418748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.419039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.419049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.952 qpair failed and we were unable to recover it. 00:33:21.952 [2024-06-07 23:29:44.419387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.419724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.952 [2024-06-07 23:29:44.419733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.953 qpair failed and we were unable to recover it. 00:33:21.953 [2024-06-07 23:29:44.420087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.953 [2024-06-07 23:29:44.420391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.953 [2024-06-07 23:29:44.420401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.953 qpair failed and we were unable to recover it. 00:33:21.953 [2024-06-07 23:29:44.420725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.953 [2024-06-07 23:29:44.421050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.953 [2024-06-07 23:29:44.421059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.953 qpair failed and we were unable to recover it. 00:33:21.953 [2024-06-07 23:29:44.421395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.953 [2024-06-07 23:29:44.421492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.953 [2024-06-07 23:29:44.421502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.953 qpair failed and we were unable to recover it. 00:33:21.953 [2024-06-07 23:29:44.421919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.953 [2024-06-07 23:29:44.422223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.953 [2024-06-07 23:29:44.422232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.953 qpair failed and we were unable to recover it. 
00:33:21.953 [2024-06-07 23:29:44.422591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.953 [2024-06-07 23:29:44.422818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.953 [2024-06-07 23:29:44.422827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.953 qpair failed and we were unable to recover it. 00:33:21.953 [2024-06-07 23:29:44.423180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.953 [2024-06-07 23:29:44.423515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.953 [2024-06-07 23:29:44.423533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.953 qpair failed and we were unable to recover it. 00:33:21.953 [2024-06-07 23:29:44.423840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.953 [2024-06-07 23:29:44.424163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.953 [2024-06-07 23:29:44.424172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.953 qpair failed and we were unable to recover it. 00:33:21.953 [2024-06-07 23:29:44.424537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.953 [2024-06-07 23:29:44.424893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.953 [2024-06-07 23:29:44.424902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.953 qpair failed and we were unable to recover it. 00:33:21.953 [2024-06-07 23:29:44.425214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.953 [2024-06-07 23:29:44.425585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.953 [2024-06-07 23:29:44.425594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.953 qpair failed and we were unable to recover it. 00:33:21.953 [2024-06-07 23:29:44.425915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.953 [2024-06-07 23:29:44.426206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.953 [2024-06-07 23:29:44.426215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.953 qpair failed and we were unable to recover it. 00:33:21.953 [2024-06-07 23:29:44.426572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.953 [2024-06-07 23:29:44.426925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.953 [2024-06-07 23:29:44.426934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.953 qpair failed and we were unable to recover it. 
00:33:21.953 [2024-06-07 23:29:44.427251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.953 [2024-06-07 23:29:44.427515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.953 [2024-06-07 23:29:44.427524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.953 qpair failed and we were unable to recover it. 00:33:21.953 [2024-06-07 23:29:44.427858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.428180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.428189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.954 qpair failed and we were unable to recover it. 00:33:21.954 [2024-06-07 23:29:44.428536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.428883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.428893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.954 qpair failed and we were unable to recover it. 00:33:21.954 [2024-06-07 23:29:44.429234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.429512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.429521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.954 qpair failed and we were unable to recover it. 00:33:21.954 [2024-06-07 23:29:44.429881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.430216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.430225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.954 qpair failed and we were unable to recover it. 00:33:21.954 [2024-06-07 23:29:44.430578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.430919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.430928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.954 qpair failed and we were unable to recover it. 00:33:21.954 [2024-06-07 23:29:44.431317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.431659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.431668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.954 qpair failed and we were unable to recover it. 
00:33:21.954 [2024-06-07 23:29:44.431994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.432158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.432167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.954 qpair failed and we were unable to recover it. 00:33:21.954 [2024-06-07 23:29:44.432401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.432805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.432814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.954 qpair failed and we were unable to recover it. 00:33:21.954 [2024-06-07 23:29:44.433145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.433477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.433486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.954 qpair failed and we were unable to recover it. 00:33:21.954 [2024-06-07 23:29:44.433829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.434192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.434201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.954 qpair failed and we were unable to recover it. 00:33:21.954 [2024-06-07 23:29:44.434480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.434840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.434849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.954 qpair failed and we were unable to recover it. 00:33:21.954 [2024-06-07 23:29:44.435216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.435635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.435644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.954 qpair failed and we were unable to recover it. 00:33:21.954 [2024-06-07 23:29:44.435982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.436295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.436304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.954 qpair failed and we were unable to recover it. 
00:33:21.954 [2024-06-07 23:29:44.436647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.437010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.437019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.954 qpair failed and we were unable to recover it. 00:33:21.954 [2024-06-07 23:29:44.437349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.437728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.437737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.954 qpair failed and we were unable to recover it. 00:33:21.954 [2024-06-07 23:29:44.438085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.438445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.438455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.954 qpair failed and we were unable to recover it. 00:33:21.954 [2024-06-07 23:29:44.438806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.439162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.439171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.954 qpair failed and we were unable to recover it. 00:33:21.954 [2024-06-07 23:29:44.439501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.439846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.439856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.954 qpair failed and we were unable to recover it. 00:33:21.954 [2024-06-07 23:29:44.440200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.440582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.440591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.954 qpair failed and we were unable to recover it. 00:33:21.954 [2024-06-07 23:29:44.440954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.441202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.954 [2024-06-07 23:29:44.441212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.954 qpair failed and we were unable to recover it. 
00:33:21.954 [2024-06-07 23:29:44.441556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.441916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.441926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.955 qpair failed and we were unable to recover it. 00:33:21.955 [2024-06-07 23:29:44.442208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.442620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.442630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.955 qpair failed and we were unable to recover it. 00:33:21.955 [2024-06-07 23:29:44.442970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.443291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.443300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.955 qpair failed and we were unable to recover it. 00:33:21.955 [2024-06-07 23:29:44.443642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.443970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.443979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.955 qpair failed and we were unable to recover it. 00:33:21.955 [2024-06-07 23:29:44.444327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.444600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.444609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.955 qpair failed and we were unable to recover it. 00:33:21.955 [2024-06-07 23:29:44.445012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.445262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.445272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.955 qpair failed and we were unable to recover it. 00:33:21.955 [2024-06-07 23:29:44.445640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.446006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.446015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.955 qpair failed and we were unable to recover it. 
00:33:21.955 [2024-06-07 23:29:44.446249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.446610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.446618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.955 qpair failed and we were unable to recover it. 00:33:21.955 [2024-06-07 23:29:44.446943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.447151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.447161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.955 qpair failed and we were unable to recover it. 00:33:21.955 [2024-06-07 23:29:44.447498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.447870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.447879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.955 qpair failed and we were unable to recover it. 00:33:21.955 [2024-06-07 23:29:44.448246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.448611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.448620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.955 qpair failed and we were unable to recover it. 00:33:21.955 [2024-06-07 23:29:44.448950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.449114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.449123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.955 qpair failed and we were unable to recover it. 00:33:21.955 [2024-06-07 23:29:44.449499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.449872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.449881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.955 qpair failed and we were unable to recover it. 00:33:21.955 [2024-06-07 23:29:44.450120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.450378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.450387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.955 qpair failed and we were unable to recover it. 
00:33:21.955 [2024-06-07 23:29:44.450752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.451109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.451118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.955 qpair failed and we were unable to recover it. 00:33:21.955 [2024-06-07 23:29:44.451457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.451808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.451817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.955 qpair failed and we were unable to recover it. 00:33:21.955 [2024-06-07 23:29:44.452136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.452473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.452485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.955 qpair failed and we were unable to recover it. 00:33:21.955 [2024-06-07 23:29:44.452805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.453134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.453144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.955 qpair failed and we were unable to recover it. 00:33:21.955 [2024-06-07 23:29:44.453472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.453804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.453813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.955 qpair failed and we were unable to recover it. 00:33:21.955 [2024-06-07 23:29:44.454178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.454528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.955 [2024-06-07 23:29:44.454537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.955 qpair failed and we were unable to recover it. 00:33:21.955 [2024-06-07 23:29:44.454868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.455078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.455088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.956 qpair failed and we were unable to recover it. 
00:33:21.956 [2024-06-07 23:29:44.455316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.455655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.455664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.956 qpair failed and we were unable to recover it. 00:33:21.956 [2024-06-07 23:29:44.456025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.456366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.456375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.956 qpair failed and we were unable to recover it. 00:33:21.956 [2024-06-07 23:29:44.456737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.456917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.456926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.956 qpair failed and we were unable to recover it. 00:33:21.956 [2024-06-07 23:29:44.457122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.457535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.457544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.956 qpair failed and we were unable to recover it. 00:33:21.956 [2024-06-07 23:29:44.457910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.458246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.458256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.956 qpair failed and we were unable to recover it. 00:33:21.956 [2024-06-07 23:29:44.458589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.458937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.458946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.956 qpair failed and we were unable to recover it. 00:33:21.956 [2024-06-07 23:29:44.459274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.459631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.459641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.956 qpair failed and we were unable to recover it. 
00:33:21.956 [2024-06-07 23:29:44.459883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.460106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.460115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.956 qpair failed and we were unable to recover it. 00:33:21.956 [2024-06-07 23:29:44.460484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.460799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.460808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.956 qpair failed and we were unable to recover it. 00:33:21.956 [2024-06-07 23:29:44.461154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.461573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.461582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.956 qpair failed and we were unable to recover it. 00:33:21.956 [2024-06-07 23:29:44.461775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.462019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.462028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.956 qpair failed and we were unable to recover it. 00:33:21.956 [2024-06-07 23:29:44.462361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.462694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.462703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.956 qpair failed and we were unable to recover it. 00:33:21.956 [2024-06-07 23:29:44.463071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.463443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.463453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.956 qpair failed and we were unable to recover it. 00:33:21.956 [2024-06-07 23:29:44.463824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.464155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.464164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.956 qpair failed and we were unable to recover it. 
00:33:21.956 [2024-06-07 23:29:44.464315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.464684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.464693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.956 qpair failed and we were unable to recover it. 00:33:21.956 [2024-06-07 23:29:44.465075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.465412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.465422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.956 qpair failed and we were unable to recover it. 00:33:21.956 [2024-06-07 23:29:44.465793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.466152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.466162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.956 qpair failed and we were unable to recover it. 00:33:21.956 [2024-06-07 23:29:44.466500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.466854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.466863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.956 qpair failed and we were unable to recover it. 00:33:21.956 [2024-06-07 23:29:44.467211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.467583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.467592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.956 qpair failed and we were unable to recover it. 00:33:21.956 [2024-06-07 23:29:44.467920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.468159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.956 [2024-06-07 23:29:44.468169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.956 qpair failed and we were unable to recover it. 00:33:21.956 [2024-06-07 23:29:44.468572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.468898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.468907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.957 qpair failed and we were unable to recover it. 
00:33:21.957 [2024-06-07 23:29:44.469295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.469619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.469628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.957 qpair failed and we were unable to recover it. 00:33:21.957 [2024-06-07 23:29:44.470020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.470375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.470384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.957 qpair failed and we were unable to recover it. 00:33:21.957 [2024-06-07 23:29:44.470786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.471096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.471104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.957 qpair failed and we were unable to recover it. 00:33:21.957 [2024-06-07 23:29:44.471501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.471865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.471875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.957 qpair failed and we were unable to recover it. 00:33:21.957 [2024-06-07 23:29:44.472198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.472376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.472385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.957 qpair failed and we were unable to recover it. 00:33:21.957 [2024-06-07 23:29:44.472720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.473099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.473108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.957 qpair failed and we were unable to recover it. 00:33:21.957 [2024-06-07 23:29:44.473424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.473674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.473683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.957 qpair failed and we were unable to recover it. 
00:33:21.957 [2024-06-07 23:29:44.474015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.474374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.474383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.957 qpair failed and we were unable to recover it. 00:33:21.957 [2024-06-07 23:29:44.474754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.475101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.475110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.957 qpair failed and we were unable to recover it. 00:33:21.957 [2024-06-07 23:29:44.475444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.475741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.475750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.957 qpair failed and we were unable to recover it. 00:33:21.957 [2024-06-07 23:29:44.476105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.476468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.476478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.957 qpair failed and we were unable to recover it. 00:33:21.957 [2024-06-07 23:29:44.476842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.477197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.477206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.957 qpair failed and we were unable to recover it. 00:33:21.957 [2024-06-07 23:29:44.477554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.477895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.477904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.957 qpair failed and we were unable to recover it. 00:33:21.957 [2024-06-07 23:29:44.478276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.478638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.478647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.957 qpair failed and we were unable to recover it. 
00:33:21.957 [2024-06-07 23:29:44.478993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.479331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.479340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.957 qpair failed and we were unable to recover it. 00:33:21.957 [2024-06-07 23:29:44.479666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.480022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.480031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.957 qpair failed and we were unable to recover it. 00:33:21.957 [2024-06-07 23:29:44.480362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.480747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.480757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.957 qpair failed and we were unable to recover it. 00:33:21.957 [2024-06-07 23:29:44.481101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.481442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.481451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.957 qpair failed and we were unable to recover it. 00:33:21.957 [2024-06-07 23:29:44.481777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.482132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.957 [2024-06-07 23:29:44.482140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.957 qpair failed and we were unable to recover it. 00:33:21.957 [2024-06-07 23:29:44.482552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-06-07 23:29:44.482875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-06-07 23:29:44.482885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.958 qpair failed and we were unable to recover it. 00:33:21.958 [2024-06-07 23:29:44.483232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-06-07 23:29:44.483597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-06-07 23:29:44.483607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.958 qpair failed and we were unable to recover it. 
00:33:21.958 [2024-06-07 23:29:44.483976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-06-07 23:29:44.484306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-06-07 23:29:44.484315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.958 qpair failed and we were unable to recover it. 00:33:21.958 [2024-06-07 23:29:44.484681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-06-07 23:29:44.485022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-06-07 23:29:44.485031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.958 qpair failed and we were unable to recover it. 00:33:21.958 [2024-06-07 23:29:44.485367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-06-07 23:29:44.485737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-06-07 23:29:44.485746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.958 qpair failed and we were unable to recover it. 00:33:21.958 [2024-06-07 23:29:44.486096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-06-07 23:29:44.486450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-06-07 23:29:44.486459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.958 qpair failed and we were unable to recover it. 00:33:21.958 [2024-06-07 23:29:44.486719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-06-07 23:29:44.486976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-06-07 23:29:44.486987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.958 qpair failed and we were unable to recover it. 00:33:21.958 [2024-06-07 23:29:44.487358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-06-07 23:29:44.487692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-06-07 23:29:44.487701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.958 qpair failed and we were unable to recover it. 00:33:21.958 [2024-06-07 23:29:44.488038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-06-07 23:29:44.488374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-06-07 23:29:44.488383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.958 qpair failed and we were unable to recover it. 
00:33:21.958 [2024-06-07 23:29:44.488720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-06-07 23:29:44.489058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-06-07 23:29:44.489068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.958 qpair failed and we were unable to recover it. 00:33:21.958 [2024-06-07 23:29:44.489301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-06-07 23:29:44.489641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-06-07 23:29:44.489650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.958 qpair failed and we were unable to recover it. 00:33:21.958 [2024-06-07 23:29:44.489973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-06-07 23:29:44.490327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-06-07 23:29:44.490336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.958 qpair failed and we were unable to recover it. 00:33:21.958 [2024-06-07 23:29:44.490682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-06-07 23:29:44.491033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-06-07 23:29:44.491042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.958 qpair failed and we were unable to recover it. 00:33:21.958 [2024-06-07 23:29:44.491400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-06-07 23:29:44.491781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-06-07 23:29:44.491791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.958 qpair failed and we were unable to recover it. 00:33:21.958 [2024-06-07 23:29:44.492039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-06-07 23:29:44.492357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-06-07 23:29:44.492367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.958 qpair failed and we were unable to recover it. 00:33:21.958 [2024-06-07 23:29:44.492722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-06-07 23:29:44.493100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-06-07 23:29:44.493109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.958 qpair failed and we were unable to recover it. 
00:33:21.958 [2024-06-07 23:29:44.493438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-06-07 23:29:44.493803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-06-07 23:29:44.493812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.958 qpair failed and we were unable to recover it. 00:33:21.958 [2024-06-07 23:29:44.494163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-06-07 23:29:44.494501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-06-07 23:29:44.494510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.958 qpair failed and we were unable to recover it. 00:33:21.958 [2024-06-07 23:29:44.494828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.959 [2024-06-07 23:29:44.495069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.959 [2024-06-07 23:29:44.495077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.959 qpair failed and we were unable to recover it. 00:33:21.959 [2024-06-07 23:29:44.495401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.959 [2024-06-07 23:29:44.495589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.959 [2024-06-07 23:29:44.495600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.959 qpair failed and we were unable to recover it. 00:33:21.959 [2024-06-07 23:29:44.495976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.959 [2024-06-07 23:29:44.496239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.959 [2024-06-07 23:29:44.496253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.959 qpair failed and we were unable to recover it. 00:33:21.959 [2024-06-07 23:29:44.496607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.959 [2024-06-07 23:29:44.496979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.959 [2024-06-07 23:29:44.496987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.959 qpair failed and we were unable to recover it. 00:33:21.959 [2024-06-07 23:29:44.497314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.959 [2024-06-07 23:29:44.497651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.959 [2024-06-07 23:29:44.497660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.959 qpair failed and we were unable to recover it. 
00:33:21.959 [2024-06-07 23:29:44.497867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.959 [2024-06-07 23:29:44.498227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.959 [2024-06-07 23:29:44.498236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.959 qpair failed and we were unable to recover it. 00:33:21.959 [2024-06-07 23:29:44.498564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.959 [2024-06-07 23:29:44.498941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.959 [2024-06-07 23:29:44.498950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.959 qpair failed and we were unable to recover it. 00:33:21.959 [2024-06-07 23:29:44.499315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.959 [2024-06-07 23:29:44.499692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.959 [2024-06-07 23:29:44.499701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.959 qpair failed and we were unable to recover it. 00:33:21.959 [2024-06-07 23:29:44.500048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.959 [2024-06-07 23:29:44.500391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.959 [2024-06-07 23:29:44.500401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.959 qpair failed and we were unable to recover it. 00:33:21.959 [2024-06-07 23:29:44.500753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.959 [2024-06-07 23:29:44.501112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.959 [2024-06-07 23:29:44.501120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.959 qpair failed and we were unable to recover it. 00:33:21.959 [2024-06-07 23:29:44.501458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.959 [2024-06-07 23:29:44.501833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.959 [2024-06-07 23:29:44.501842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.959 qpair failed and we were unable to recover it. 00:33:21.959 [2024-06-07 23:29:44.502204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.959 [2024-06-07 23:29:44.502546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.959 [2024-06-07 23:29:44.502556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.959 qpair failed and we were unable to recover it. 
00:33:21.959 [2024-06-07 23:29:44.502890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.959 [2024-06-07 23:29:44.503228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.959 [2024-06-07 23:29:44.503237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.959 qpair failed and we were unable to recover it. 00:33:21.959 [2024-06-07 23:29:44.503591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.959 [2024-06-07 23:29:44.503836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.959 [2024-06-07 23:29:44.503845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.959 qpair failed and we were unable to recover it. 00:33:21.959 [2024-06-07 23:29:44.504058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.959 [2024-06-07 23:29:44.504456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.959 [2024-06-07 23:29:44.504467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.959 qpair failed and we were unable to recover it. 00:33:21.959 [2024-06-07 23:29:44.504793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.959 [2024-06-07 23:29:44.505154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.960 [2024-06-07 23:29:44.505163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.960 qpair failed and we were unable to recover it. 00:33:21.960 [2024-06-07 23:29:44.505488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.960 [2024-06-07 23:29:44.505851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.960 [2024-06-07 23:29:44.505859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.960 qpair failed and we were unable to recover it. 00:33:21.960 [2024-06-07 23:29:44.506113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.960 [2024-06-07 23:29:44.506467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.960 [2024-06-07 23:29:44.506476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.960 qpair failed and we were unable to recover it. 00:33:21.960 [2024-06-07 23:29:44.506846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.960 [2024-06-07 23:29:44.507106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.960 [2024-06-07 23:29:44.507116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.960 qpair failed and we were unable to recover it. 
00:33:21.960 [2024-06-07 23:29:44.507449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.960 [2024-06-07 23:29:44.507660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.960 [2024-06-07 23:29:44.507669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.960 qpair failed and we were unable to recover it. 00:33:21.960 [2024-06-07 23:29:44.507978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.960 [2024-06-07 23:29:44.508341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.960 [2024-06-07 23:29:44.508349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.960 qpair failed and we were unable to recover it. 00:33:21.960 [2024-06-07 23:29:44.508698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.960 [2024-06-07 23:29:44.509038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.960 [2024-06-07 23:29:44.509047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.960 qpair failed and we were unable to recover it. 00:33:21.960 [2024-06-07 23:29:44.509403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.960 [2024-06-07 23:29:44.509781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.960 [2024-06-07 23:29:44.509789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.960 qpair failed and we were unable to recover it. 00:33:21.960 [2024-06-07 23:29:44.510138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.960 [2024-06-07 23:29:44.510496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.960 [2024-06-07 23:29:44.510505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.960 qpair failed and we were unable to recover it. 00:33:21.960 [2024-06-07 23:29:44.510833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.960 [2024-06-07 23:29:44.511211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.960 [2024-06-07 23:29:44.511221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.960 qpair failed and we were unable to recover it. 00:33:21.960 [2024-06-07 23:29:44.511570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.960 [2024-06-07 23:29:44.511938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.960 [2024-06-07 23:29:44.511948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.960 qpair failed and we were unable to recover it. 
00:33:21.960 [2024-06-07 23:29:44.512320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.960 [2024-06-07 23:29:44.512680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.960 [2024-06-07 23:29:44.512689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.960 qpair failed and we were unable to recover it. 00:33:21.960 [2024-06-07 23:29:44.513048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.960 [2024-06-07 23:29:44.513392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.960 [2024-06-07 23:29:44.513402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.960 qpair failed and we were unable to recover it. 00:33:21.960 [2024-06-07 23:29:44.513755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.960 [2024-06-07 23:29:44.514138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.960 [2024-06-07 23:29:44.514147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.960 qpair failed and we were unable to recover it. 00:33:21.960 [2024-06-07 23:29:44.514494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.960 [2024-06-07 23:29:44.514860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.960 [2024-06-07 23:29:44.514870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.960 qpair failed and we were unable to recover it. 00:33:21.960 [2024-06-07 23:29:44.515206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.960 [2024-06-07 23:29:44.515491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.960 [2024-06-07 23:29:44.515500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.960 qpair failed and we were unable to recover it. 00:33:21.960 [2024-06-07 23:29:44.515861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.516207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.516216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.961 qpair failed and we were unable to recover it. 00:33:21.961 [2024-06-07 23:29:44.516583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.516958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.516967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.961 qpair failed and we were unable to recover it. 
00:33:21.961 [2024-06-07 23:29:44.517280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.517610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.517619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.961 qpair failed and we were unable to recover it. 00:33:21.961 [2024-06-07 23:29:44.517952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.518330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.518339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.961 qpair failed and we were unable to recover it. 00:33:21.961 [2024-06-07 23:29:44.518700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.519066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.519075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.961 qpair failed and we were unable to recover it. 00:33:21.961 [2024-06-07 23:29:44.519442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.519736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.519745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.961 qpair failed and we were unable to recover it. 00:33:21.961 [2024-06-07 23:29:44.520186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.520545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.520554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.961 qpair failed and we were unable to recover it. 00:33:21.961 [2024-06-07 23:29:44.520921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.521261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.521270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.961 qpair failed and we were unable to recover it. 00:33:21.961 [2024-06-07 23:29:44.521645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.521996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.522007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.961 qpair failed and we were unable to recover it. 
00:33:21.961 [2024-06-07 23:29:44.522321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.522622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.522631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.961 qpair failed and we were unable to recover it. 00:33:21.961 [2024-06-07 23:29:44.522988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.523292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.523301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.961 qpair failed and we were unable to recover it. 00:33:21.961 [2024-06-07 23:29:44.523653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.524014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.524023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.961 qpair failed and we were unable to recover it. 00:33:21.961 [2024-06-07 23:29:44.524371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.524734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.524743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.961 qpair failed and we were unable to recover it. 00:33:21.961 [2024-06-07 23:29:44.525106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.525455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.525465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.961 qpair failed and we were unable to recover it. 00:33:21.961 [2024-06-07 23:29:44.525787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.526119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.526128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.961 qpair failed and we were unable to recover it. 00:33:21.961 [2024-06-07 23:29:44.526457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.527586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.527607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.961 qpair failed and we were unable to recover it. 
00:33:21.961 [2024-06-07 23:29:44.527927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.528312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.528322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.961 qpair failed and we were unable to recover it. 00:33:21.961 [2024-06-07 23:29:44.528665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.529027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.529036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.961 qpair failed and we were unable to recover it. 00:33:21.961 [2024-06-07 23:29:44.529360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.529775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.529788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.961 qpair failed and we were unable to recover it. 00:33:21.961 [2024-06-07 23:29:44.530132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.530439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.961 [2024-06-07 23:29:44.530448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.961 qpair failed and we were unable to recover it. 00:33:21.961 [2024-06-07 23:29:44.530833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.531183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.531191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.962 qpair failed and we were unable to recover it. 00:33:21.962 [2024-06-07 23:29:44.531527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.531889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.531898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.962 qpair failed and we were unable to recover it. 00:33:21.962 [2024-06-07 23:29:44.532223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.532586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.532595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.962 qpair failed and we were unable to recover it. 
00:33:21.962 [2024-06-07 23:29:44.532943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.533323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.533333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.962 qpair failed and we were unable to recover it. 00:33:21.962 [2024-06-07 23:29:44.533695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.534037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.534047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.962 qpair failed and we were unable to recover it. 00:33:21.962 [2024-06-07 23:29:44.534372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.534745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.534754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.962 qpair failed and we were unable to recover it. 00:33:21.962 [2024-06-07 23:29:44.535071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.535435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.535445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.962 qpair failed and we were unable to recover it. 00:33:21.962 [2024-06-07 23:29:44.535811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.535982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.535995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.962 qpair failed and we were unable to recover it. 00:33:21.962 [2024-06-07 23:29:44.536326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.536652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.536661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.962 qpair failed and we were unable to recover it. 00:33:21.962 [2024-06-07 23:29:44.536941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.537275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.537284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.962 qpair failed and we were unable to recover it. 
00:33:21.962 [2024-06-07 23:29:44.537624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.537964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.537973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.962 qpair failed and we were unable to recover it. 00:33:21.962 [2024-06-07 23:29:44.538338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.538744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.538752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.962 qpair failed and we were unable to recover it. 00:33:21.962 [2024-06-07 23:29:44.539100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.539441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.539451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.962 qpair failed and we were unable to recover it. 00:33:21.962 [2024-06-07 23:29:44.539786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.540112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.540122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.962 qpair failed and we were unable to recover it. 00:33:21.962 [2024-06-07 23:29:44.540486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.540811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.540821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.962 qpair failed and we were unable to recover it. 00:33:21.962 [2024-06-07 23:29:44.541193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.541525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.541535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.962 qpair failed and we were unable to recover it. 00:33:21.962 [2024-06-07 23:29:44.541869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.542173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.542182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.962 qpair failed and we were unable to recover it. 
00:33:21.962 [2024-06-07 23:29:44.542525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.542876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.542885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.962 qpair failed and we were unable to recover it. 00:33:21.962 [2024-06-07 23:29:44.543212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.543581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.543591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.962 qpair failed and we were unable to recover it. 00:33:21.962 [2024-06-07 23:29:44.543936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.544095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.544105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.962 qpair failed and we were unable to recover it. 00:33:21.962 [2024-06-07 23:29:44.544307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.544615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.544624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.962 qpair failed and we were unable to recover it. 00:33:21.962 [2024-06-07 23:29:44.544928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.545294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.962 [2024-06-07 23:29:44.545303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.963 qpair failed and we were unable to recover it. 00:33:21.963 [2024-06-07 23:29:44.545680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.546051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.546059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.963 qpair failed and we were unable to recover it. 00:33:21.963 [2024-06-07 23:29:44.546395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.546744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.546754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.963 qpair failed and we were unable to recover it. 
00:33:21.963 [2024-06-07 23:29:44.547167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.547415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.547425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.963 qpair failed and we were unable to recover it. 00:33:21.963 [2024-06-07 23:29:44.547771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.548110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.548118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.963 qpair failed and we were unable to recover it. 00:33:21.963 [2024-06-07 23:29:44.548382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.548730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.548740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.963 qpair failed and we were unable to recover it. 00:33:21.963 [2024-06-07 23:29:44.549108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.549221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.549230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.963 qpair failed and we were unable to recover it. 00:33:21.963 [2024-06-07 23:29:44.549530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.549914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.549923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.963 qpair failed and we were unable to recover it. 00:33:21.963 [2024-06-07 23:29:44.550276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.550635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.550644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.963 qpair failed and we were unable to recover it. 00:33:21.963 [2024-06-07 23:29:44.550991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.551353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.551363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.963 qpair failed and we were unable to recover it. 
00:33:21.963 [2024-06-07 23:29:44.551699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.552032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.552041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.963 qpair failed and we were unable to recover it. 00:33:21.963 [2024-06-07 23:29:44.552392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.552748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.552757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.963 qpair failed and we were unable to recover it. 00:33:21.963 [2024-06-07 23:29:44.553043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.553366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.553376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.963 qpair failed and we were unable to recover it. 00:33:21.963 [2024-06-07 23:29:44.553752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.554117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.554126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.963 qpair failed and we were unable to recover it. 00:33:21.963 [2024-06-07 23:29:44.554466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.554831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.554840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.963 qpair failed and we were unable to recover it. 00:33:21.963 [2024-06-07 23:29:44.555187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.555511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.555521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.963 qpair failed and we were unable to recover it. 00:33:21.963 [2024-06-07 23:29:44.555761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.556116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.556125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.963 qpair failed and we were unable to recover it. 
00:33:21.963 [2024-06-07 23:29:44.556473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.556828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.556837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.963 qpair failed and we were unable to recover it. 00:33:21.963 [2024-06-07 23:29:44.557194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.557560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.557572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.963 qpair failed and we were unable to recover it. 00:33:21.963 [2024-06-07 23:29:44.557926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.558333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.558342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.963 qpair failed and we were unable to recover it. 00:33:21.963 [2024-06-07 23:29:44.558676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.559076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.559085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.963 qpair failed and we were unable to recover it. 00:33:21.963 [2024-06-07 23:29:44.559395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.559742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.963 [2024-06-07 23:29:44.559751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.963 qpair failed and we were unable to recover it. 00:33:21.963 [2024-06-07 23:29:44.560126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.560518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.560528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.964 qpair failed and we were unable to recover it. 00:33:21.964 [2024-06-07 23:29:44.560885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.561270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.561279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.964 qpair failed and we were unable to recover it. 
00:33:21.964 [2024-06-07 23:29:44.561655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.561982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.561991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.964 qpair failed and we were unable to recover it. 00:33:21.964 [2024-06-07 23:29:44.562339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.562646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.562655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.964 qpair failed and we were unable to recover it. 00:33:21.964 [2024-06-07 23:29:44.562987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.563364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.563373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.964 qpair failed and we were unable to recover it. 00:33:21.964 [2024-06-07 23:29:44.563844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.564201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.564210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.964 qpair failed and we were unable to recover it. 00:33:21.964 [2024-06-07 23:29:44.564590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.564958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.564970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.964 qpair failed and we were unable to recover it. 00:33:21.964 [2024-06-07 23:29:44.565327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.565598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.565607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.964 qpair failed and we were unable to recover it. 00:33:21.964 [2024-06-07 23:29:44.565932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.566249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.566259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.964 qpair failed and we were unable to recover it. 
00:33:21.964 [2024-06-07 23:29:44.566720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.567073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.567081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.964 qpair failed and we were unable to recover it. 00:33:21.964 [2024-06-07 23:29:44.567317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.567685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.567694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.964 qpair failed and we were unable to recover it. 00:33:21.964 [2024-06-07 23:29:44.567933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.568157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.568166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.964 qpair failed and we were unable to recover it. 00:33:21.964 [2024-06-07 23:29:44.568413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.568785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.568794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.964 qpair failed and we were unable to recover it. 00:33:21.964 [2024-06-07 23:29:44.569040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.569255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.569265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.964 qpair failed and we were unable to recover it. 00:33:21.964 [2024-06-07 23:29:44.569613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.569982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.569990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.964 qpair failed and we were unable to recover it. 00:33:21.964 [2024-06-07 23:29:44.570351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.570526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.570536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.964 qpair failed and we were unable to recover it. 
00:33:21.964 [2024-06-07 23:29:44.570872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.571099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.571108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.964 qpair failed and we were unable to recover it. 00:33:21.964 [2024-06-07 23:29:44.571434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.571805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.571815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.964 qpair failed and we were unable to recover it. 00:33:21.964 [2024-06-07 23:29:44.572164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.572505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.572514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.964 qpair failed and we were unable to recover it. 00:33:21.964 [2024-06-07 23:29:44.572770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.964 [2024-06-07 23:29:44.573139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.573149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.965 qpair failed and we were unable to recover it. 00:33:21.965 [2024-06-07 23:29:44.573485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.573857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.573866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.965 qpair failed and we were unable to recover it. 00:33:21.965 [2024-06-07 23:29:44.574231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.574589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.574599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.965 qpair failed and we were unable to recover it. 00:33:21.965 [2024-06-07 23:29:44.574905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.575144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.575153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.965 qpair failed and we were unable to recover it. 
00:33:21.965 [2024-06-07 23:29:44.575499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.575844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.575854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.965 qpair failed and we were unable to recover it. 00:33:21.965 [2024-06-07 23:29:44.576114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.576417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.576426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.965 qpair failed and we were unable to recover it. 00:33:21.965 [2024-06-07 23:29:44.576757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.577109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.577118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.965 qpair failed and we were unable to recover it. 00:33:21.965 [2024-06-07 23:29:44.577332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.577691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.577700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.965 qpair failed and we were unable to recover it. 00:33:21.965 [2024-06-07 23:29:44.578034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.578240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.578259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.965 qpair failed and we were unable to recover it. 00:33:21.965 [2024-06-07 23:29:44.578584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.578991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.579000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.965 qpair failed and we were unable to recover it. 00:33:21.965 [2024-06-07 23:29:44.579204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.579584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.579594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.965 qpair failed and we were unable to recover it. 
00:33:21.965 [2024-06-07 23:29:44.579954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.580366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.580375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.965 qpair failed and we were unable to recover it. 00:33:21.965 [2024-06-07 23:29:44.580617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.580961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.580969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.965 qpair failed and we were unable to recover it. 00:33:21.965 [2024-06-07 23:29:44.581297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.581566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.581575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.965 qpair failed and we were unable to recover it. 00:33:21.965 [2024-06-07 23:29:44.581941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.582308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.582317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.965 qpair failed and we were unable to recover it. 00:33:21.965 [2024-06-07 23:29:44.582656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.582967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.582977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.965 qpair failed and we were unable to recover it. 00:33:21.965 [2024-06-07 23:29:44.583188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.583386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.583396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.965 qpair failed and we were unable to recover it. 00:33:21.965 [2024-06-07 23:29:44.583734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.584123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.584133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.965 qpair failed and we were unable to recover it. 
00:33:21.965 [2024-06-07 23:29:44.584323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.584677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.584687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.965 qpair failed and we were unable to recover it. 00:33:21.965 [2024-06-07 23:29:44.584901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.585173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.585182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.965 qpair failed and we were unable to recover it. 00:33:21.965 [2024-06-07 23:29:44.585531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.585913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.585923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.965 qpair failed and we were unable to recover it. 00:33:21.965 [2024-06-07 23:29:44.586298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.586654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.586663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.965 qpair failed and we were unable to recover it. 00:33:21.965 [2024-06-07 23:29:44.587119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.587351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.587361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.965 qpair failed and we were unable to recover it. 00:33:21.965 [2024-06-07 23:29:44.587739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.588076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.588084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.965 qpair failed and we were unable to recover it. 00:33:21.965 [2024-06-07 23:29:44.588430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.588791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.965 [2024-06-07 23:29:44.588800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.965 qpair failed and we were unable to recover it. 
00:33:21.965 [2024-06-07 23:29:44.589051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.589394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.589403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.966 qpair failed and we were unable to recover it. 00:33:21.966 [2024-06-07 23:29:44.589690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.590023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.590033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.966 qpair failed and we were unable to recover it. 00:33:21.966 [2024-06-07 23:29:44.590421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.590766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.590775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.966 qpair failed and we were unable to recover it. 00:33:21.966 [2024-06-07 23:29:44.591135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.591490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.591499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.966 qpair failed and we were unable to recover it. 00:33:21.966 [2024-06-07 23:29:44.591826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.592165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.592175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.966 qpair failed and we were unable to recover it. 00:33:21.966 [2024-06-07 23:29:44.593166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.593538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.593550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.966 qpair failed and we were unable to recover it. 00:33:21.966 [2024-06-07 23:29:44.593912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.594082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.594093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.966 qpair failed and we were unable to recover it. 
00:33:21.966 [2024-06-07 23:29:44.594423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.594709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.594718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.966 qpair failed and we were unable to recover it. 00:33:21.966 [2024-06-07 23:29:44.595095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.595435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.595445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.966 qpair failed and we were unable to recover it. 00:33:21.966 [2024-06-07 23:29:44.595787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.596164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.596173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.966 qpair failed and we were unable to recover it. 00:33:21.966 [2024-06-07 23:29:44.596579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.596935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.596944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.966 qpair failed and we were unable to recover it. 00:33:21.966 [2024-06-07 23:29:44.597278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.597566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.597575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.966 qpair failed and we were unable to recover it. 00:33:21.966 [2024-06-07 23:29:44.597903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.598266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.598275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.966 qpair failed and we were unable to recover it. 00:33:21.966 [2024-06-07 23:29:44.598611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.598982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.598994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.966 qpair failed and we were unable to recover it. 
00:33:21.966 [2024-06-07 23:29:44.599361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.599706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.599715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.966 qpair failed and we were unable to recover it. 00:33:21.966 [2024-06-07 23:29:44.600062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.600382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.600391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.966 qpair failed and we were unable to recover it. 00:33:21.966 [2024-06-07 23:29:44.600735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.601028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.601038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.966 qpair failed and we were unable to recover it. 00:33:21.966 [2024-06-07 23:29:44.601367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.601722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.601731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.966 qpair failed and we were unable to recover it. 00:33:21.966 [2024-06-07 23:29:44.602086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.602401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.602411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.966 qpair failed and we were unable to recover it. 00:33:21.966 [2024-06-07 23:29:44.602754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.603097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.603106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.966 qpair failed and we were unable to recover it. 00:33:21.966 [2024-06-07 23:29:44.603451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.603658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.603668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.966 qpair failed and we were unable to recover it. 
00:33:21.966 [2024-06-07 23:29:44.604098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.604424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.604433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.966 qpair failed and we were unable to recover it. 00:33:21.966 [2024-06-07 23:29:44.604744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.605108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.605116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.966 qpair failed and we were unable to recover it. 00:33:21.966 [2024-06-07 23:29:44.605482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.605813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.605823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.966 qpair failed and we were unable to recover it. 00:33:21.966 [2024-06-07 23:29:44.606186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.966 [2024-06-07 23:29:44.606528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.606537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.967 qpair failed and we were unable to recover it. 00:33:21.967 [2024-06-07 23:29:44.606917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.607272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.607282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.967 qpair failed and we were unable to recover it. 00:33:21.967 [2024-06-07 23:29:44.607608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.607979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.607988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.967 qpair failed and we were unable to recover it. 00:33:21.967 [2024-06-07 23:29:44.608313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.608693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.608701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.967 qpair failed and we were unable to recover it. 
00:33:21.967 [2024-06-07 23:29:44.609067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.609430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.609439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.967 qpair failed and we were unable to recover it. 00:33:21.967 [2024-06-07 23:29:44.609767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.610136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.610145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.967 qpair failed and we were unable to recover it. 00:33:21.967 [2024-06-07 23:29:44.610497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.610854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.610863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.967 qpair failed and we were unable to recover it. 00:33:21.967 [2024-06-07 23:29:44.611214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.611592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.611602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.967 qpair failed and we were unable to recover it. 00:33:21.967 [2024-06-07 23:29:44.611961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.612334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.612343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.967 qpair failed and we were unable to recover it. 00:33:21.967 [2024-06-07 23:29:44.612585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.612958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.612968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.967 qpair failed and we were unable to recover it. 00:33:21.967 [2024-06-07 23:29:44.613299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.613628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.613637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.967 qpair failed and we were unable to recover it. 
00:33:21.967 [2024-06-07 23:29:44.613980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.614203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.614212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.967 qpair failed and we were unable to recover it. 00:33:21.967 [2024-06-07 23:29:44.614616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.614951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.614961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.967 qpair failed and we were unable to recover it. 00:33:21.967 [2024-06-07 23:29:44.615310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.615679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.615688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.967 qpair failed and we were unable to recover it. 00:33:21.967 [2024-06-07 23:29:44.615981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.616210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.616219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.967 qpair failed and we were unable to recover it. 00:33:21.967 [2024-06-07 23:29:44.616562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.616920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.616929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.967 qpair failed and we were unable to recover it. 00:33:21.967 [2024-06-07 23:29:44.617261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.617579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.617589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.967 qpair failed and we were unable to recover it. 00:33:21.967 [2024-06-07 23:29:44.617924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.618185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.618194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.967 qpair failed and we were unable to recover it. 
00:33:21.967 [2024-06-07 23:29:44.618543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.618881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.618891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.967 qpair failed and we were unable to recover it. 00:33:21.967 [2024-06-07 23:29:44.619132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.619444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-06-07 23:29:44.619454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:21.967 qpair failed and we were unable to recover it. 00:33:22.244 [2024-06-07 23:29:44.619790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.244 [2024-06-07 23:29:44.620100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.244 [2024-06-07 23:29:44.620111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.244 qpair failed and we were unable to recover it. 00:33:22.244 [2024-06-07 23:29:44.620488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.244 [2024-06-07 23:29:44.620583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.244 [2024-06-07 23:29:44.620594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.244 qpair failed and we were unable to recover it. 00:33:22.244 [2024-06-07 23:29:44.621011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.244 [2024-06-07 23:29:44.621349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.244 [2024-06-07 23:29:44.621359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.244 qpair failed and we were unable to recover it. 00:33:22.244 [2024-06-07 23:29:44.621733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.244 [2024-06-07 23:29:44.622062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.244 [2024-06-07 23:29:44.622072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.244 qpair failed and we were unable to recover it. 00:33:22.244 [2024-06-07 23:29:44.622397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.244 [2024-06-07 23:29:44.622757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.244 [2024-06-07 23:29:44.622767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.244 qpair failed and we were unable to recover it. 
00:33:22.244 [2024-06-07 23:29:44.623114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.244 [2024-06-07 23:29:44.623452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.244 [2024-06-07 23:29:44.623462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.244 qpair failed and we were unable to recover it. 00:33:22.244 [2024-06-07 23:29:44.623805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.244 [2024-06-07 23:29:44.624179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.244 [2024-06-07 23:29:44.624189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.244 qpair failed and we were unable to recover it. 00:33:22.244 [2024-06-07 23:29:44.624522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.244 [2024-06-07 23:29:44.624877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.624887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.245 qpair failed and we were unable to recover it. 00:33:22.245 [2024-06-07 23:29:44.625265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.625513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.625522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.245 qpair failed and we were unable to recover it. 00:33:22.245 [2024-06-07 23:29:44.625891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.626182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.626192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.245 qpair failed and we were unable to recover it. 00:33:22.245 [2024-06-07 23:29:44.626524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.626860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.626869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.245 qpair failed and we were unable to recover it. 00:33:22.245 [2024-06-07 23:29:44.627212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.627522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.627533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.245 qpair failed and we were unable to recover it. 
00:33:22.245 [2024-06-07 23:29:44.627911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.628250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.628260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.245 qpair failed and we were unable to recover it. 00:33:22.245 [2024-06-07 23:29:44.628639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.628842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.628852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.245 qpair failed and we were unable to recover it. 00:33:22.245 [2024-06-07 23:29:44.629215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.629489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.629499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.245 qpair failed and we were unable to recover it. 00:33:22.245 [2024-06-07 23:29:44.629839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.630167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.630176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.245 qpair failed and we were unable to recover it. 00:33:22.245 [2024-06-07 23:29:44.630522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.630862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.630871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.245 qpair failed and we were unable to recover it. 00:33:22.245 [2024-06-07 23:29:44.631213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.631514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.631524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.245 qpair failed and we were unable to recover it. 00:33:22.245 [2024-06-07 23:29:44.631665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.631998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.632008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.245 qpair failed and we were unable to recover it. 
00:33:22.245 [2024-06-07 23:29:44.632274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.632482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.632493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.245 qpair failed and we were unable to recover it. 00:33:22.245 [2024-06-07 23:29:44.632833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.633170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.633182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.245 qpair failed and we were unable to recover it. 00:33:22.245 [2024-06-07 23:29:44.633572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.633907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.633917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.245 qpair failed and we were unable to recover it. 00:33:22.245 [2024-06-07 23:29:44.634269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.634524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.634533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.245 qpair failed and we were unable to recover it. 00:33:22.245 [2024-06-07 23:29:44.634857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.635167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.635178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.245 qpair failed and we were unable to recover it. 00:33:22.245 [2024-06-07 23:29:44.635481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.635848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.635858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.245 qpair failed and we were unable to recover it. 00:33:22.245 [2024-06-07 23:29:44.636245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.636567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.636576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.245 qpair failed and we were unable to recover it. 
00:33:22.245 [2024-06-07 23:29:44.636897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.637223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.637233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.245 qpair failed and we were unable to recover it. 00:33:22.245 [2024-06-07 23:29:44.637618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.637942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.637951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.245 qpair failed and we were unable to recover it. 00:33:22.245 [2024-06-07 23:29:44.638301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.638676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.638685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.245 qpair failed and we were unable to recover it. 00:33:22.245 [2024-06-07 23:29:44.639014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.639355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.639368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.245 qpair failed and we were unable to recover it. 00:33:22.245 [2024-06-07 23:29:44.639745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.640077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.640086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.245 qpair failed and we were unable to recover it. 00:33:22.245 [2024-06-07 23:29:44.640435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.640746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.640755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.245 qpair failed and we were unable to recover it. 00:33:22.245 [2024-06-07 23:29:44.641122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.641491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.245 [2024-06-07 23:29:44.641500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.245 qpair failed and we were unable to recover it. 
00:33:22.245 [2024-06-07 23:29:44.641860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.642161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.642170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.246 qpair failed and we were unable to recover it. 00:33:22.246 [2024-06-07 23:29:44.642500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.642827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.642836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.246 qpair failed and we were unable to recover it. 00:33:22.246 [2024-06-07 23:29:44.643156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.643507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.643517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.246 qpair failed and we were unable to recover it. 00:33:22.246 [2024-06-07 23:29:44.643725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.644067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.644076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.246 qpair failed and we were unable to recover it. 00:33:22.246 [2024-06-07 23:29:44.644323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.644596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.644604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.246 qpair failed and we were unable to recover it. 00:33:22.246 [2024-06-07 23:29:44.644959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.645282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.645291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.246 qpair failed and we were unable to recover it. 00:33:22.246 [2024-06-07 23:29:44.645643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.645867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.645876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.246 qpair failed and we were unable to recover it. 
00:33:22.246 [2024-06-07 23:29:44.646198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.646464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.646474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.246 qpair failed and we were unable to recover it. 00:33:22.246 [2024-06-07 23:29:44.646798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.647188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.647196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.246 qpair failed and we were unable to recover it. 00:33:22.246 [2024-06-07 23:29:44.647641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.647971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.647981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.246 qpair failed and we were unable to recover it. 00:33:22.246 [2024-06-07 23:29:44.648230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.648636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.648645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.246 qpair failed and we were unable to recover it. 00:33:22.246 [2024-06-07 23:29:44.649015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.649350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.649360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.246 qpair failed and we were unable to recover it. 00:33:22.246 [2024-06-07 23:29:44.649710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.650021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.650030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.246 qpair failed and we were unable to recover it. 00:33:22.246 [2024-06-07 23:29:44.650402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.650700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.650709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.246 qpair failed and we were unable to recover it. 
00:33:22.246 [2024-06-07 23:29:44.651012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.651280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.651290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.246 qpair failed and we were unable to recover it. 00:33:22.246 [2024-06-07 23:29:44.651568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.651906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.651914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.246 qpair failed and we were unable to recover it. 00:33:22.246 [2024-06-07 23:29:44.652331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.652694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.652703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.246 qpair failed and we were unable to recover it. 00:33:22.246 [2024-06-07 23:29:44.653043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.653369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.653379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.246 qpair failed and we were unable to recover it. 00:33:22.246 [2024-06-07 23:29:44.653727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.653972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.653982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.246 qpair failed and we were unable to recover it. 00:33:22.246 [2024-06-07 23:29:44.654294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.654611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.654620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.246 qpair failed and we were unable to recover it. 00:33:22.246 [2024-06-07 23:29:44.654865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.655235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.655248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.246 qpair failed and we were unable to recover it. 
00:33:22.246 [2024-06-07 23:29:44.655642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.655984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.655993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.246 qpair failed and we were unable to recover it. 00:33:22.246 [2024-06-07 23:29:44.656352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.656779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.656788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.246 qpair failed and we were unable to recover it. 00:33:22.246 [2024-06-07 23:29:44.657107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.657383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.657393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.246 qpair failed and we were unable to recover it. 00:33:22.246 [2024-06-07 23:29:44.657638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.657934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.657943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.246 qpair failed and we were unable to recover it. 00:33:22.246 [2024-06-07 23:29:44.658142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.658629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.658638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.246 qpair failed and we were unable to recover it. 00:33:22.246 [2024-06-07 23:29:44.658868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.659091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.659100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.246 qpair failed and we were unable to recover it. 00:33:22.246 [2024-06-07 23:29:44.659510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.659803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.659812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.246 qpair failed and we were unable to recover it. 
00:33:22.246 [2024-06-07 23:29:44.660061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.660462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.246 [2024-06-07 23:29:44.660472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.246 qpair failed and we were unable to recover it. 00:33:22.247 [2024-06-07 23:29:44.660878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.661179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.661189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.247 qpair failed and we were unable to recover it. 00:33:22.247 [2024-06-07 23:29:44.661362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.661665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.661674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.247 qpair failed and we were unable to recover it. 00:33:22.247 [2024-06-07 23:29:44.661974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.662319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.662328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.247 qpair failed and we were unable to recover it. 00:33:22.247 [2024-06-07 23:29:44.662707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.663017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.663025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.247 qpair failed and we were unable to recover it. 00:33:22.247 [2024-06-07 23:29:44.663389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.663750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.663759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.247 qpair failed and we were unable to recover it. 00:33:22.247 [2024-06-07 23:29:44.664082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.664430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.664440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.247 qpair failed and we were unable to recover it. 
00:33:22.247 [2024-06-07 23:29:44.664827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.665163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.665172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.247 qpair failed and we were unable to recover it. 00:33:22.247 [2024-06-07 23:29:44.665574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.665907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.665916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.247 qpair failed and we were unable to recover it. 00:33:22.247 [2024-06-07 23:29:44.666232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.666493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.666502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.247 qpair failed and we were unable to recover it. 00:33:22.247 [2024-06-07 23:29:44.666843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.667151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.667162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.247 qpair failed and we were unable to recover it. 00:33:22.247 [2024-06-07 23:29:44.667511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.667803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.667812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.247 qpair failed and we were unable to recover it. 00:33:22.247 [2024-06-07 23:29:44.668135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.668480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.668489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.247 qpair failed and we were unable to recover it. 00:33:22.247 [2024-06-07 23:29:44.668855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.669184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.669194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.247 qpair failed and we were unable to recover it. 
00:33:22.247 [2024-06-07 23:29:44.669461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.669830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.669840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.247 qpair failed and we were unable to recover it. 00:33:22.247 [2024-06-07 23:29:44.670170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.670543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.670552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.247 qpair failed and we were unable to recover it. 00:33:22.247 [2024-06-07 23:29:44.670926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.671304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.671313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.247 qpair failed and we were unable to recover it. 00:33:22.247 [2024-06-07 23:29:44.671469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.671779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.671788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.247 qpair failed and we were unable to recover it. 00:33:22.247 [2024-06-07 23:29:44.672034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.672358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.672368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.247 qpair failed and we were unable to recover it. 00:33:22.247 [2024-06-07 23:29:44.672736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.673063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.673073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.247 qpair failed and we were unable to recover it. 00:33:22.247 [2024-06-07 23:29:44.673429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.673806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.673817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.247 qpair failed and we were unable to recover it. 
00:33:22.247 [2024-06-07 23:29:44.674146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.674441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.674451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.247 qpair failed and we were unable to recover it. 00:33:22.247 [2024-06-07 23:29:44.674687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.674977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.674986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.247 qpair failed and we were unable to recover it. 00:33:22.247 [2024-06-07 23:29:44.675239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.675736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.675745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.247 qpair failed and we were unable to recover it. 00:33:22.247 [2024-06-07 23:29:44.675996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.676323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.676332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.247 qpair failed and we were unable to recover it. 00:33:22.247 [2024-06-07 23:29:44.676668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.676974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.676983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.247 qpair failed and we were unable to recover it. 00:33:22.247 [2024-06-07 23:29:44.677334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.677643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.677652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.247 qpair failed and we were unable to recover it. 00:33:22.247 [2024-06-07 23:29:44.678043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.678350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.678359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.247 qpair failed and we were unable to recover it. 
00:33:22.247 [2024-06-07 23:29:44.678737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.678970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.678979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.247 qpair failed and we were unable to recover it. 00:33:22.247 [2024-06-07 23:29:44.679355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.247 [2024-06-07 23:29:44.679701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.679710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.248 qpair failed and we were unable to recover it. 00:33:22.248 [2024-06-07 23:29:44.679901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.680156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.680165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.248 qpair failed and we were unable to recover it. 00:33:22.248 [2024-06-07 23:29:44.680462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.680697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.680705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.248 qpair failed and we were unable to recover it. 00:33:22.248 [2024-06-07 23:29:44.681071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.681453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.681463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.248 qpair failed and we were unable to recover it. 00:33:22.248 [2024-06-07 23:29:44.681805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.682146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.682155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.248 qpair failed and we were unable to recover it. 00:33:22.248 [2024-06-07 23:29:44.682508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.682834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.682844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.248 qpair failed and we were unable to recover it. 
00:33:22.248 [2024-06-07 23:29:44.683203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.683607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.683616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.248 qpair failed and we were unable to recover it. 00:33:22.248 [2024-06-07 23:29:44.683933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.684272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.684281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.248 qpair failed and we were unable to recover it. 00:33:22.248 [2024-06-07 23:29:44.684725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.685079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.685088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.248 qpair failed and we were unable to recover it. 00:33:22.248 [2024-06-07 23:29:44.685461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.685839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.685848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.248 qpair failed and we were unable to recover it. 00:33:22.248 [2024-06-07 23:29:44.686223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.686467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.686476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.248 qpair failed and we were unable to recover it. 00:33:22.248 [2024-06-07 23:29:44.686791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.687125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.687134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.248 qpair failed and we were unable to recover it. 00:33:22.248 [2024-06-07 23:29:44.687471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.687854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.687864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.248 qpair failed and we were unable to recover it. 
00:33:22.248 [2024-06-07 23:29:44.688113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.688250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.688260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.248 qpair failed and we were unable to recover it. 00:33:22.248 [2024-06-07 23:29:44.688603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.688975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.688984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.248 qpair failed and we were unable to recover it. 00:33:22.248 [2024-06-07 23:29:44.689305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.689643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.689652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.248 qpair failed and we were unable to recover it. 00:33:22.248 [2024-06-07 23:29:44.689981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.690295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.690305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.248 qpair failed and we were unable to recover it. 00:33:22.248 [2024-06-07 23:29:44.690641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.691009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.691018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.248 qpair failed and we were unable to recover it. 00:33:22.248 [2024-06-07 23:29:44.691363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.691625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.691634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.248 qpair failed and we were unable to recover it. 00:33:22.248 [2024-06-07 23:29:44.692002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.692336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.692346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.248 qpair failed and we were unable to recover it. 
00:33:22.248 [2024-06-07 23:29:44.692599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.692984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.692993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.248 qpair failed and we were unable to recover it. 00:33:22.248 [2024-06-07 23:29:44.693127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.693532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.693541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.248 qpair failed and we were unable to recover it. 00:33:22.248 [2024-06-07 23:29:44.693910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.694134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.694144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.248 qpair failed and we were unable to recover it. 00:33:22.248 [2024-06-07 23:29:44.694488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.694817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.694827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.248 qpair failed and we were unable to recover it. 00:33:22.248 [2024-06-07 23:29:44.695172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.695504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.695513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.248 qpair failed and we were unable to recover it. 00:33:22.248 [2024-06-07 23:29:44.695836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.696153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.696162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.248 qpair failed and we were unable to recover it. 00:33:22.248 [2024-06-07 23:29:44.696458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.696782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.696791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.248 qpair failed and we were unable to recover it. 
00:33:22.248 [2024-06-07 23:29:44.697154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.697411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.248 [2024-06-07 23:29:44.697421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.248 qpair failed and we were unable to recover it. 00:33:22.248 [2024-06-07 23:29:44.697741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.698072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.698081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.249 qpair failed and we were unable to recover it. 00:33:22.249 [2024-06-07 23:29:44.698325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.698690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.698698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.249 qpair failed and we were unable to recover it. 00:33:22.249 [2024-06-07 23:29:44.698904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.699258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.699268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.249 qpair failed and we were unable to recover it. 00:33:22.249 [2024-06-07 23:29:44.699701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.699954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.699964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.249 qpair failed and we were unable to recover it. 00:33:22.249 [2024-06-07 23:29:44.700306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.700648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.700657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.249 qpair failed and we were unable to recover it. 00:33:22.249 [2024-06-07 23:29:44.700827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.701120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.701128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.249 qpair failed and we were unable to recover it. 
00:33:22.249 [2024-06-07 23:29:44.701384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.701697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.701706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.249 qpair failed and we were unable to recover it. 00:33:22.249 [2024-06-07 23:29:44.702055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.702393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.702403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.249 qpair failed and we were unable to recover it. 00:33:22.249 [2024-06-07 23:29:44.702606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.702968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.702976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.249 qpair failed and we were unable to recover it. 00:33:22.249 [2024-06-07 23:29:44.703351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.703657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.703667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.249 qpair failed and we were unable to recover it. 00:33:22.249 [2024-06-07 23:29:44.703997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.704218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.704228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.249 qpair failed and we were unable to recover it. 00:33:22.249 [2024-06-07 23:29:44.704588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.704885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.704894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.249 qpair failed and we were unable to recover it. 00:33:22.249 [2024-06-07 23:29:44.705269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.705627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.705636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.249 qpair failed and we were unable to recover it. 
00:33:22.249 [2024-06-07 23:29:44.705965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.706354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.706363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.249 qpair failed and we were unable to recover it. 00:33:22.249 [2024-06-07 23:29:44.706694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.707000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.707011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.249 qpair failed and we were unable to recover it. 00:33:22.249 [2024-06-07 23:29:44.707361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.707701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.707710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.249 qpair failed and we were unable to recover it. 00:33:22.249 [2024-06-07 23:29:44.708016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.708240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.708253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.249 qpair failed and we were unable to recover it. 00:33:22.249 [2024-06-07 23:29:44.708583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.708886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.708894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.249 qpair failed and we were unable to recover it. 00:33:22.249 [2024-06-07 23:29:44.709099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.709345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.709355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.249 qpair failed and we were unable to recover it. 00:33:22.249 [2024-06-07 23:29:44.709731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.709998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.710007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.249 qpair failed and we were unable to recover it. 
00:33:22.249 [2024-06-07 23:29:44.710331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.710710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.710718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.249 qpair failed and we were unable to recover it. 00:33:22.249 [2024-06-07 23:29:44.711070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.711399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.711410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.249 qpair failed and we were unable to recover it. 00:33:22.249 [2024-06-07 23:29:44.711754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.712071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.712080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.249 qpair failed and we were unable to recover it. 00:33:22.249 [2024-06-07 23:29:44.712460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.712797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.712806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.249 qpair failed and we were unable to recover it. 00:33:22.249 [2024-06-07 23:29:44.713086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.249 [2024-06-07 23:29:44.713311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.713320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.250 qpair failed and we were unable to recover it. 00:33:22.250 [2024-06-07 23:29:44.713660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.714020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.714030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.250 qpair failed and we were unable to recover it. 00:33:22.250 [2024-06-07 23:29:44.714381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.714760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.714770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.250 qpair failed and we were unable to recover it. 
00:33:22.250 [2024-06-07 23:29:44.715147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.715520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.715529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.250 qpair failed and we were unable to recover it. 00:33:22.250 [2024-06-07 23:29:44.715892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.716200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.716209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.250 qpair failed and we were unable to recover it. 00:33:22.250 [2024-06-07 23:29:44.716435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.716808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.716818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.250 qpair failed and we were unable to recover it. 00:33:22.250 [2024-06-07 23:29:44.717135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.717545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.717554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.250 qpair failed and we were unable to recover it. 00:33:22.250 [2024-06-07 23:29:44.717895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.718267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.718276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.250 qpair failed and we were unable to recover it. 00:33:22.250 [2024-06-07 23:29:44.718573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.718902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.718911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.250 qpair failed and we were unable to recover it. 00:33:22.250 [2024-06-07 23:29:44.719194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.719562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.719571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.250 qpair failed and we were unable to recover it. 
00:33:22.250 [2024-06-07 23:29:44.719938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.720264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.720273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.250 qpair failed and we were unable to recover it. 00:33:22.250 [2024-06-07 23:29:44.720622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.720848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.720857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.250 qpair failed and we were unable to recover it. 00:33:22.250 [2024-06-07 23:29:44.721250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.721612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.721622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.250 qpair failed and we were unable to recover it. 00:33:22.250 [2024-06-07 23:29:44.721965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.722295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.722304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.250 qpair failed and we were unable to recover it. 00:33:22.250 [2024-06-07 23:29:44.722701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.723014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.723023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.250 qpair failed and we were unable to recover it. 00:33:22.250 [2024-06-07 23:29:44.723264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.723607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.723616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.250 qpair failed and we were unable to recover it. 00:33:22.250 [2024-06-07 23:29:44.723939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.724309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.724318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.250 qpair failed and we were unable to recover it. 
00:33:22.250 [2024-06-07 23:29:44.724698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.725034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.725043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.250 qpair failed and we were unable to recover it. 00:33:22.250 [2024-06-07 23:29:44.725403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.725752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.725760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.250 qpair failed and we were unable to recover it. 00:33:22.250 [2024-06-07 23:29:44.726077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.726355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.726365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.250 qpair failed and we were unable to recover it. 00:33:22.250 [2024-06-07 23:29:44.726693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.727077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.727087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.250 qpair failed and we were unable to recover it. 00:33:22.250 [2024-06-07 23:29:44.727358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.727707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.727717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.250 qpair failed and we were unable to recover it. 00:33:22.250 [2024-06-07 23:29:44.727975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.728295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.728304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.250 qpair failed and we were unable to recover it. 00:33:22.250 [2024-06-07 23:29:44.728670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.728985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.728993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.250 qpair failed and we were unable to recover it. 
00:33:22.250 [2024-06-07 23:29:44.729317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.729674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.729682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.250 qpair failed and we were unable to recover it. 00:33:22.250 [2024-06-07 23:29:44.729999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.730208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.730218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.250 qpair failed and we were unable to recover it. 00:33:22.250 [2024-06-07 23:29:44.730487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.730824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.730833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.250 qpair failed and we were unable to recover it. 00:33:22.250 [2024-06-07 23:29:44.731152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.731351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.731361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.250 qpair failed and we were unable to recover it. 00:33:22.250 [2024-06-07 23:29:44.731741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.732069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.732078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.250 qpair failed and we were unable to recover it. 00:33:22.250 [2024-06-07 23:29:44.732427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.250 [2024-06-07 23:29:44.732768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.732777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.251 qpair failed and we were unable to recover it. 00:33:22.251 [2024-06-07 23:29:44.733107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.733413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.733422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.251 qpair failed and we were unable to recover it. 
00:33:22.251 [2024-06-07 23:29:44.733763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.734118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.734127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.251 qpair failed and we were unable to recover it. 00:33:22.251 [2024-06-07 23:29:44.734587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.734913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.734921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.251 qpair failed and we were unable to recover it. 00:33:22.251 [2024-06-07 23:29:44.735305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.735517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.735527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.251 qpair failed and we were unable to recover it. 00:33:22.251 [2024-06-07 23:29:44.735898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.736236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.736250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.251 qpair failed and we were unable to recover it. 00:33:22.251 [2024-06-07 23:29:44.736578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.736889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.736897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.251 qpair failed and we were unable to recover it. 00:33:22.251 [2024-06-07 23:29:44.737262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.737644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.737653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.251 qpair failed and we were unable to recover it. 00:33:22.251 [2024-06-07 23:29:44.737994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.738301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.738311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.251 qpair failed and we were unable to recover it. 
00:33:22.251 [2024-06-07 23:29:44.738605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.738905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.738914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.251 qpair failed and we were unable to recover it. 00:33:22.251 [2024-06-07 23:29:44.739278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.739647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.739656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.251 qpair failed and we were unable to recover it. 00:33:22.251 [2024-06-07 23:29:44.739898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.740188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.740196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.251 qpair failed and we were unable to recover it. 00:33:22.251 [2024-06-07 23:29:44.740603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.740912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.740923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.251 qpair failed and we were unable to recover it. 00:33:22.251 [2024-06-07 23:29:44.741166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.741528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.741536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.251 qpair failed and we were unable to recover it. 00:33:22.251 [2024-06-07 23:29:44.741875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.742241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.742258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.251 qpair failed and we were unable to recover it. 00:33:22.251 [2024-06-07 23:29:44.742534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.742876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.742885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.251 qpair failed and we were unable to recover it. 
00:33:22.251 [2024-06-07 23:29:44.743201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.743457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.743466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.251 qpair failed and we were unable to recover it. 00:33:22.251 [2024-06-07 23:29:44.743786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.744055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.744064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.251 qpair failed and we were unable to recover it. 00:33:22.251 [2024-06-07 23:29:44.744418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.744775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.744784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.251 qpair failed and we were unable to recover it. 00:33:22.251 [2024-06-07 23:29:44.745109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.745405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.745414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.251 qpair failed and we were unable to recover it. 00:33:22.251 [2024-06-07 23:29:44.745664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.746041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.746051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.251 qpair failed and we were unable to recover it. 00:33:22.251 [2024-06-07 23:29:44.746400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.746763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.746772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.251 qpair failed and we were unable to recover it. 00:33:22.251 [2024-06-07 23:29:44.747092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.747417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.747427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.251 qpair failed and we were unable to recover it. 
00:33:22.251 [2024-06-07 23:29:44.747771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.748096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.748104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.251 qpair failed and we were unable to recover it. 00:33:22.251 [2024-06-07 23:29:44.748447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.748825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.748835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.251 qpair failed and we were unable to recover it. 00:33:22.251 [2024-06-07 23:29:44.749161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.749405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.749414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.251 qpair failed and we were unable to recover it. 00:33:22.251 [2024-06-07 23:29:44.749776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.750046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.750055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.251 qpair failed and we were unable to recover it. 00:33:22.251 [2024-06-07 23:29:44.750319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.750686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.750695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.251 qpair failed and we were unable to recover it. 00:33:22.251 [2024-06-07 23:29:44.751010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.751380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.251 [2024-06-07 23:29:44.751390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.252 qpair failed and we were unable to recover it. 00:33:22.252 [2024-06-07 23:29:44.751621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.751853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.751861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.252 qpair failed and we were unable to recover it. 
00:33:22.252 [2024-06-07 23:29:44.752147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.752504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.752513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.252 qpair failed and we were unable to recover it. 00:33:22.252 [2024-06-07 23:29:44.752761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.753098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.753108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.252 qpair failed and we were unable to recover it. 00:33:22.252 [2024-06-07 23:29:44.753465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.753833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.753842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.252 qpair failed and we were unable to recover it. 00:33:22.252 [2024-06-07 23:29:44.754194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.754391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.754400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.252 qpair failed and we were unable to recover it. 00:33:22.252 [2024-06-07 23:29:44.754775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.755148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.755157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.252 qpair failed and we were unable to recover it. 00:33:22.252 [2024-06-07 23:29:44.755512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.755759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.755768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.252 qpair failed and we were unable to recover it. 00:33:22.252 [2024-06-07 23:29:44.756117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.756443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.756452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.252 qpair failed and we were unable to recover it. 
00:33:22.252 [2024-06-07 23:29:44.756807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.757143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.757152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.252 qpair failed and we were unable to recover it. 00:33:22.252 [2024-06-07 23:29:44.757520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.757841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.757850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.252 qpair failed and we were unable to recover it. 00:33:22.252 [2024-06-07 23:29:44.758195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.758533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.758542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.252 qpair failed and we were unable to recover it. 00:33:22.252 [2024-06-07 23:29:44.758916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.759177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.759187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.252 qpair failed and we were unable to recover it. 00:33:22.252 [2024-06-07 23:29:44.759439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.759811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.759821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.252 qpair failed and we were unable to recover it. 00:33:22.252 [2024-06-07 23:29:44.760160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.760456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.760465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.252 qpair failed and we were unable to recover it. 00:33:22.252 [2024-06-07 23:29:44.760780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.761118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.761126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.252 qpair failed and we were unable to recover it. 
00:33:22.252 [2024-06-07 23:29:44.761452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.761778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.761787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.252 qpair failed and we were unable to recover it. 00:33:22.252 [2024-06-07 23:29:44.762136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.762301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.762310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.252 qpair failed and we were unable to recover it. 00:33:22.252 [2024-06-07 23:29:44.762625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.762981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.762990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.252 qpair failed and we were unable to recover it. 00:33:22.252 [2024-06-07 23:29:44.763347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.763653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.763661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.252 qpair failed and we were unable to recover it. 00:33:22.252 [2024-06-07 23:29:44.764021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.764232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.764245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.252 qpair failed and we were unable to recover it. 00:33:22.252 [2024-06-07 23:29:44.764582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.764954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.764963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.252 qpair failed and we were unable to recover it. 00:33:22.252 [2024-06-07 23:29:44.765293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.765617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.765626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.252 qpair failed and we were unable to recover it. 
00:33:22.252 [2024-06-07 23:29:44.765950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.766163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.766174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.252 qpair failed and we were unable to recover it. 00:33:22.252 [2024-06-07 23:29:44.766522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.766740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.766750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.252 qpair failed and we were unable to recover it. 00:33:22.252 [2024-06-07 23:29:44.767132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.767346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.767356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.252 qpair failed and we were unable to recover it. 00:33:22.252 [2024-06-07 23:29:44.767720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.768100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.768109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.252 qpair failed and we were unable to recover it. 00:33:22.252 [2024-06-07 23:29:44.768480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.768853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.768862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.252 qpair failed and we were unable to recover it. 00:33:22.252 [2024-06-07 23:29:44.769233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.769567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.769576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.252 qpair failed and we were unable to recover it. 00:33:22.252 [2024-06-07 23:29:44.769936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.770300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.252 [2024-06-07 23:29:44.770309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.253 qpair failed and we were unable to recover it. 
00:33:22.253 [2024-06-07 23:29:44.770558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.770934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.770943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.253 qpair failed and we were unable to recover it. 00:33:22.253 [2024-06-07 23:29:44.771131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.771479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.771489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.253 qpair failed and we were unable to recover it. 00:33:22.253 [2024-06-07 23:29:44.771802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.772118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.772128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.253 qpair failed and we were unable to recover it. 00:33:22.253 [2024-06-07 23:29:44.772472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.772681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.772690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.253 qpair failed and we were unable to recover it. 00:33:22.253 [2024-06-07 23:29:44.772955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.773258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.773267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.253 qpair failed and we were unable to recover it. 00:33:22.253 [2024-06-07 23:29:44.773615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.773961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.773972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.253 qpair failed and we were unable to recover it. 00:33:22.253 [2024-06-07 23:29:44.774297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.774641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.774651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.253 qpair failed and we were unable to recover it. 
00:33:22.253 [2024-06-07 23:29:44.775011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.775218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.775228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.253 qpair failed and we were unable to recover it. 00:33:22.253 [2024-06-07 23:29:44.775479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.775819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.775828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.253 qpair failed and we were unable to recover it. 00:33:22.253 [2024-06-07 23:29:44.776203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.776583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.776593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.253 qpair failed and we were unable to recover it. 00:33:22.253 [2024-06-07 23:29:44.776849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.777229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.777239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.253 qpair failed and we were unable to recover it. 00:33:22.253 [2024-06-07 23:29:44.777603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.777983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.777992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.253 qpair failed and we were unable to recover it. 00:33:22.253 [2024-06-07 23:29:44.778372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.778704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.778713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.253 qpair failed and we were unable to recover it. 00:33:22.253 [2024-06-07 23:29:44.778913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.779285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.779294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.253 qpair failed and we were unable to recover it. 
00:33:22.253 [2024-06-07 23:29:44.779525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.779868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.779876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.253 qpair failed and we were unable to recover it. 00:33:22.253 [2024-06-07 23:29:44.780202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.780547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.780560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.253 qpair failed and we were unable to recover it. 00:33:22.253 [2024-06-07 23:29:44.780907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.781279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.781288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.253 qpair failed and we were unable to recover it. 00:33:22.253 [2024-06-07 23:29:44.781642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.782000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.782009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.253 qpair failed and we were unable to recover it. 00:33:22.253 [2024-06-07 23:29:44.782240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.782564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.782573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.253 qpair failed and we were unable to recover it. 00:33:22.253 [2024-06-07 23:29:44.782904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.783256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.783266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.253 qpair failed and we were unable to recover it. 00:33:22.253 [2024-06-07 23:29:44.783632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.783993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.784002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.253 qpair failed and we were unable to recover it. 
00:33:22.253 [2024-06-07 23:29:44.784327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.784592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.253 [2024-06-07 23:29:44.784600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.253 qpair failed and we were unable to recover it.
00:33:22.253-00:33:22.259 [... the same sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt timestamped between 23:29:44.784 and 23:29:44.886 ...]
00:33:22.259 [2024-06-07 23:29:44.886405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.259 [2024-06-07 23:29:44.886773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.259 [2024-06-07 23:29:44.886782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.259 qpair failed and we were unable to recover it.
00:33:22.259 [2024-06-07 23:29:44.887026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.259 [2024-06-07 23:29:44.887366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.259 [2024-06-07 23:29:44.887375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.259 qpair failed and we were unable to recover it. 00:33:22.259 [2024-06-07 23:29:44.887741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.259 [2024-06-07 23:29:44.887964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.259 [2024-06-07 23:29:44.887973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.259 qpair failed and we were unable to recover it. 00:33:22.259 [2024-06-07 23:29:44.888321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.259 [2024-06-07 23:29:44.888703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.259 [2024-06-07 23:29:44.888713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.259 qpair failed and we were unable to recover it. 00:33:22.259 [2024-06-07 23:29:44.888902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.259 [2024-06-07 23:29:44.889231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.259 [2024-06-07 23:29:44.889240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.259 qpair failed and we were unable to recover it. 00:33:22.259 [2024-06-07 23:29:44.889638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.259 [2024-06-07 23:29:44.890010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.259 [2024-06-07 23:29:44.890019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.259 qpair failed and we were unable to recover it. 00:33:22.259 [2024-06-07 23:29:44.890340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.259 [2024-06-07 23:29:44.890677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.259 [2024-06-07 23:29:44.890686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.259 qpair failed and we were unable to recover it. 00:33:22.259 [2024-06-07 23:29:44.891010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.259 [2024-06-07 23:29:44.891372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.259 [2024-06-07 23:29:44.891382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.259 qpair failed and we were unable to recover it. 
00:33:22.259 [2024-06-07 23:29:44.891730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.259 [2024-06-07 23:29:44.892062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.259 [2024-06-07 23:29:44.892070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.259 qpair failed and we were unable to recover it. 00:33:22.259 [2024-06-07 23:29:44.892435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.259 [2024-06-07 23:29:44.892754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.259 [2024-06-07 23:29:44.892763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.259 qpair failed and we were unable to recover it. 00:33:22.259 [2024-06-07 23:29:44.893109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.259 [2024-06-07 23:29:44.893445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.259 [2024-06-07 23:29:44.893455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.259 qpair failed and we were unable to recover it. 00:33:22.259 [2024-06-07 23:29:44.893754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.259 [2024-06-07 23:29:44.894112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.259 [2024-06-07 23:29:44.894121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.259 qpair failed and we were unable to recover it. 00:33:22.259 [2024-06-07 23:29:44.894430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.259 [2024-06-07 23:29:44.894779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.259 [2024-06-07 23:29:44.894788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.259 qpair failed and we were unable to recover it. 00:33:22.259 [2024-06-07 23:29:44.895156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.259 [2024-06-07 23:29:44.895505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.259 [2024-06-07 23:29:44.895514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.259 qpair failed and we were unable to recover it. 00:33:22.259 [2024-06-07 23:29:44.895860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.259 [2024-06-07 23:29:44.896226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.259 [2024-06-07 23:29:44.896235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.259 qpair failed and we were unable to recover it. 
00:33:22.259 [2024-06-07 23:29:44.896612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.259 [2024-06-07 23:29:44.896975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.259 [2024-06-07 23:29:44.896984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.260 qpair failed and we were unable to recover it. 00:33:22.260 [2024-06-07 23:29:44.897348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.260 [2024-06-07 23:29:44.897731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.260 [2024-06-07 23:29:44.897741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.260 qpair failed and we were unable to recover it. 00:33:22.260 [2024-06-07 23:29:44.898069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.260 [2024-06-07 23:29:44.898462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.260 [2024-06-07 23:29:44.898471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.260 qpair failed and we were unable to recover it. 00:33:22.260 [2024-06-07 23:29:44.898671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.260 [2024-06-07 23:29:44.899062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.260 [2024-06-07 23:29:44.899071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.260 qpair failed and we were unable to recover it. 00:33:22.260 [2024-06-07 23:29:44.899427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.260 [2024-06-07 23:29:44.899780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.260 [2024-06-07 23:29:44.899789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.260 qpair failed and we were unable to recover it. 00:33:22.260 [2024-06-07 23:29:44.900154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.260 [2024-06-07 23:29:44.900466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.260 [2024-06-07 23:29:44.900476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.260 qpair failed and we were unable to recover it. 00:33:22.260 [2024-06-07 23:29:44.900803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.260 [2024-06-07 23:29:44.901171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.260 [2024-06-07 23:29:44.901180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.260 qpair failed and we were unable to recover it. 
00:33:22.260 [2024-06-07 23:29:44.901495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.260 [2024-06-07 23:29:44.901865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.260 [2024-06-07 23:29:44.901873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.260 qpair failed and we were unable to recover it. 00:33:22.260 [2024-06-07 23:29:44.902237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.260 [2024-06-07 23:29:44.902608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.260 [2024-06-07 23:29:44.902618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.260 qpair failed and we were unable to recover it. 00:33:22.260 [2024-06-07 23:29:44.902967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.260 [2024-06-07 23:29:44.903306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.260 [2024-06-07 23:29:44.903315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.260 qpair failed and we were unable to recover it. 00:33:22.260 [2024-06-07 23:29:44.903739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.260 [2024-06-07 23:29:44.904164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.260 [2024-06-07 23:29:44.904173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.260 qpair failed and we were unable to recover it. 00:33:22.260 [2024-06-07 23:29:44.904431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.260 [2024-06-07 23:29:44.904773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.260 [2024-06-07 23:29:44.904784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.260 qpair failed and we were unable to recover it. 00:33:22.260 [2024-06-07 23:29:44.905125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.260 [2024-06-07 23:29:44.905461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.260 [2024-06-07 23:29:44.905471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.260 qpair failed and we were unable to recover it. 00:33:22.260 [2024-06-07 23:29:44.905805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.260 [2024-06-07 23:29:44.906160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.260 [2024-06-07 23:29:44.906168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.260 qpair failed and we were unable to recover it. 
00:33:22.260 [2024-06-07 23:29:44.906490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.260 [2024-06-07 23:29:44.906847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.260 [2024-06-07 23:29:44.906856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.260 qpair failed and we were unable to recover it. 00:33:22.260 [2024-06-07 23:29:44.907224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.260 [2024-06-07 23:29:44.907590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.260 [2024-06-07 23:29:44.907600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.260 qpair failed and we were unable to recover it. 00:33:22.260 [2024-06-07 23:29:44.907925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.260 [2024-06-07 23:29:44.908262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.260 [2024-06-07 23:29:44.908272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.260 qpair failed and we were unable to recover it. 00:33:22.260 [2024-06-07 23:29:44.908617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.260 [2024-06-07 23:29:44.908948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.260 [2024-06-07 23:29:44.908957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.260 qpair failed and we were unable to recover it. 00:33:22.260 [2024-06-07 23:29:44.909342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.260 [2024-06-07 23:29:44.909690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.260 [2024-06-07 23:29:44.909699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.260 qpair failed and we were unable to recover it. 00:33:22.529 [2024-06-07 23:29:44.910048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.529 [2024-06-07 23:29:44.910404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.529 [2024-06-07 23:29:44.910414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.529 qpair failed and we were unable to recover it. 00:33:22.529 [2024-06-07 23:29:44.910754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.529 [2024-06-07 23:29:44.911129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.529 [2024-06-07 23:29:44.911138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.529 qpair failed and we were unable to recover it. 
00:33:22.529 [2024-06-07 23:29:44.911502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.529 [2024-06-07 23:29:44.911853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.529 [2024-06-07 23:29:44.911862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.529 qpair failed and we were unable to recover it. 00:33:22.529 [2024-06-07 23:29:44.912202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.529 [2024-06-07 23:29:44.912570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.529 [2024-06-07 23:29:44.912579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.529 qpair failed and we were unable to recover it. 00:33:22.529 [2024-06-07 23:29:44.912928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.913250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.913260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.530 qpair failed and we were unable to recover it. 00:33:22.530 [2024-06-07 23:29:44.913597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.913943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.913952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.530 qpair failed and we were unable to recover it. 00:33:22.530 [2024-06-07 23:29:44.914275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.914634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.914643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.530 qpair failed and we were unable to recover it. 00:33:22.530 [2024-06-07 23:29:44.915007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.915351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.915360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.530 qpair failed and we were unable to recover it. 00:33:22.530 [2024-06-07 23:29:44.915714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.916064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.916074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.530 qpair failed and we were unable to recover it. 
00:33:22.530 [2024-06-07 23:29:44.916276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.916655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.916664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.530 qpair failed and we were unable to recover it. 00:33:22.530 [2024-06-07 23:29:44.917025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.917398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.917407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.530 qpair failed and we were unable to recover it. 00:33:22.530 [2024-06-07 23:29:44.917772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.918127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.918138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.530 qpair failed and we were unable to recover it. 00:33:22.530 [2024-06-07 23:29:44.918459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.918819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.918828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.530 qpair failed and we were unable to recover it. 00:33:22.530 [2024-06-07 23:29:44.919197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.919447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.919459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.530 qpair failed and we were unable to recover it. 00:33:22.530 [2024-06-07 23:29:44.919807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.920181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.920190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.530 qpair failed and we were unable to recover it. 00:33:22.530 [2024-06-07 23:29:44.920579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.920937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.920947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.530 qpair failed and we were unable to recover it. 
00:33:22.530 [2024-06-07 23:29:44.921297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.921636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.921645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.530 qpair failed and we were unable to recover it. 00:33:22.530 [2024-06-07 23:29:44.922003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.922393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.922402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.530 qpair failed and we were unable to recover it. 00:33:22.530 [2024-06-07 23:29:44.922690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.923042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.923051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.530 qpair failed and we were unable to recover it. 00:33:22.530 [2024-06-07 23:29:44.923385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.923712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.923720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.530 qpair failed and we were unable to recover it. 00:33:22.530 [2024-06-07 23:29:44.924078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.924419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.924429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.530 qpair failed and we were unable to recover it. 00:33:22.530 [2024-06-07 23:29:44.924757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.925117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.925127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.530 qpair failed and we were unable to recover it. 00:33:22.530 [2024-06-07 23:29:44.925369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.925758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.925767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.530 qpair failed and we were unable to recover it. 
00:33:22.530 [2024-06-07 23:29:44.926101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.926473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.926483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.530 qpair failed and we were unable to recover it. 00:33:22.530 [2024-06-07 23:29:44.926890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.927232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.927246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.530 qpair failed and we were unable to recover it. 00:33:22.530 [2024-06-07 23:29:44.927622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.927973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.927982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.530 qpair failed and we were unable to recover it. 00:33:22.530 [2024-06-07 23:29:44.928328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.928565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.928574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.530 qpair failed and we were unable to recover it. 00:33:22.530 [2024-06-07 23:29:44.928903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.929282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.929292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.530 qpair failed and we were unable to recover it. 00:33:22.530 [2024-06-07 23:29:44.929655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.930033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.930042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.530 qpair failed and we were unable to recover it. 00:33:22.530 [2024-06-07 23:29:44.930216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.930539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.930548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.530 qpair failed and we were unable to recover it. 
00:33:22.530 [2024-06-07 23:29:44.930918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.931198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.931207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.530 qpair failed and we were unable to recover it. 00:33:22.530 [2024-06-07 23:29:44.931578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.931907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.931916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.530 qpair failed and we were unable to recover it. 00:33:22.530 [2024-06-07 23:29:44.932250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.932593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.530 [2024-06-07 23:29:44.932602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.531 qpair failed and we were unable to recover it. 00:33:22.531 [2024-06-07 23:29:44.932951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.933288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.933297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.531 qpair failed and we were unable to recover it. 00:33:22.531 [2024-06-07 23:29:44.933656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.933941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.933950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.531 qpair failed and we were unable to recover it. 00:33:22.531 [2024-06-07 23:29:44.934287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.934620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.934629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.531 qpair failed and we were unable to recover it. 00:33:22.531 [2024-06-07 23:29:44.934975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.935329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.935339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.531 qpair failed and we were unable to recover it. 
00:33:22.531 [2024-06-07 23:29:44.935705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.936031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.936040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.531 qpair failed and we were unable to recover it. 00:33:22.531 [2024-06-07 23:29:44.936391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.936755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.936763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.531 qpair failed and we were unable to recover it. 00:33:22.531 [2024-06-07 23:29:44.937094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.937394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.937403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.531 qpair failed and we were unable to recover it. 00:33:22.531 [2024-06-07 23:29:44.937764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.938138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.938147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.531 qpair failed and we were unable to recover it. 00:33:22.531 [2024-06-07 23:29:44.938504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.938836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.938844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.531 qpair failed and we were unable to recover it. 00:33:22.531 [2024-06-07 23:29:44.939175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.939556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.939566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.531 qpair failed and we were unable to recover it. 00:33:22.531 [2024-06-07 23:29:44.939945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.940352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.940362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.531 qpair failed and we were unable to recover it. 
00:33:22.531 [2024-06-07 23:29:44.940712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.941039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.941048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.531 qpair failed and we were unable to recover it. 00:33:22.531 [2024-06-07 23:29:44.941375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.941747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.941755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.531 qpair failed and we were unable to recover it. 00:33:22.531 [2024-06-07 23:29:44.942084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.942424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.942434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.531 qpair failed and we were unable to recover it. 00:33:22.531 [2024-06-07 23:29:44.942768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.943118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.943127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.531 qpair failed and we were unable to recover it. 00:33:22.531 [2024-06-07 23:29:44.943517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.943878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.943886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.531 qpair failed and we were unable to recover it. 00:33:22.531 [2024-06-07 23:29:44.944250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.944597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.944605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.531 qpair failed and we were unable to recover it. 00:33:22.531 [2024-06-07 23:29:44.944956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.945286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.945295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.531 qpair failed and we were unable to recover it. 
00:33:22.531 [2024-06-07 23:29:44.945582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.945930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.945939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.531 qpair failed and we were unable to recover it. 00:33:22.531 [2024-06-07 23:29:44.946307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.946651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.946660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.531 qpair failed and we were unable to recover it. 00:33:22.531 [2024-06-07 23:29:44.947023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.947355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.947364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.531 qpair failed and we were unable to recover it. 00:33:22.531 [2024-06-07 23:29:44.947691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.948050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.948059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.531 qpair failed and we were unable to recover it. 00:33:22.531 [2024-06-07 23:29:44.948394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.948743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.948752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.531 qpair failed and we were unable to recover it. 00:33:22.531 [2024-06-07 23:29:44.949115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.949457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.949466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.531 qpair failed and we were unable to recover it. 00:33:22.531 [2024-06-07 23:29:44.949804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.950157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.950166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.531 qpair failed and we were unable to recover it. 
00:33:22.531 [2024-06-07 23:29:44.950488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.950711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.950721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.531 qpair failed and we were unable to recover it. 00:33:22.531 [2024-06-07 23:29:44.951083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.951340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.951350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.531 qpair failed and we were unable to recover it. 00:33:22.531 [2024-06-07 23:29:44.951705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.531 [2024-06-07 23:29:44.952046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.952055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.532 qpair failed and we were unable to recover it. 00:33:22.532 [2024-06-07 23:29:44.952419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.952801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.952810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.532 qpair failed and we were unable to recover it. 00:33:22.532 [2024-06-07 23:29:44.953135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.953464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.953475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.532 qpair failed and we were unable to recover it. 00:33:22.532 [2024-06-07 23:29:44.953832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.954164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.954172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.532 qpair failed and we were unable to recover it. 00:33:22.532 [2024-06-07 23:29:44.954561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.954895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.954903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.532 qpair failed and we were unable to recover it. 
00:33:22.532 [2024-06-07 23:29:44.955253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.955633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.955642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.532 qpair failed and we were unable to recover it. 00:33:22.532 [2024-06-07 23:29:44.955938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.956307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.956316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.532 qpair failed and we were unable to recover it. 00:33:22.532 [2024-06-07 23:29:44.956686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.957040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.957049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.532 qpair failed and we were unable to recover it. 00:33:22.532 [2024-06-07 23:29:44.957398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.957747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.957756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.532 qpair failed and we were unable to recover it. 00:33:22.532 [2024-06-07 23:29:44.958118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.958462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.958471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.532 qpair failed and we were unable to recover it. 00:33:22.532 [2024-06-07 23:29:44.958807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.959167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.959176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.532 qpair failed and we were unable to recover it. 00:33:22.532 [2024-06-07 23:29:44.959618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.959828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.959837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.532 qpair failed and we were unable to recover it. 
00:33:22.532 [2024-06-07 23:29:44.960182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.960397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.960410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.532 qpair failed and we were unable to recover it. 00:33:22.532 [2024-06-07 23:29:44.960703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.961081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.961090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.532 qpair failed and we were unable to recover it. 00:33:22.532 [2024-06-07 23:29:44.961438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.961788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.961798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.532 qpair failed and we were unable to recover it. 00:33:22.532 [2024-06-07 23:29:44.962162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.962535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.962544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.532 qpair failed and we were unable to recover it. 00:33:22.532 [2024-06-07 23:29:44.962835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.963159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.963167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.532 qpair failed and we were unable to recover it. 00:33:22.532 [2024-06-07 23:29:44.963506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.963877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.963885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.532 qpair failed and we were unable to recover it. 00:33:22.532 [2024-06-07 23:29:44.964246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.964599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.964608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.532 qpair failed and we were unable to recover it. 
00:33:22.532 [2024-06-07 23:29:44.965061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.965384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.965393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.532 qpair failed and we were unable to recover it. 00:33:22.532 [2024-06-07 23:29:44.965681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.966042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.966051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.532 qpair failed and we were unable to recover it. 00:33:22.532 [2024-06-07 23:29:44.966397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.966771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.966780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.532 qpair failed and we were unable to recover it. 00:33:22.532 [2024-06-07 23:29:44.967093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.967454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.967464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.532 qpair failed and we were unable to recover it. 00:33:22.532 [2024-06-07 23:29:44.967827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.968197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.968206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.532 qpair failed and we were unable to recover it. 00:33:22.532 [2024-06-07 23:29:44.968532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.968888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.968897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.532 qpair failed and we were unable to recover it. 00:33:22.532 [2024-06-07 23:29:44.969247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.969598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.969606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.532 qpair failed and we were unable to recover it. 
00:33:22.532 [2024-06-07 23:29:44.969973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.970472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.970509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.532 qpair failed and we were unable to recover it. 00:33:22.532 [2024-06-07 23:29:44.970881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.971236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.532 [2024-06-07 23:29:44.971252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.532 qpair failed and we were unable to recover it. 00:33:22.532 [2024-06-07 23:29:44.971618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.971958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.971967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.533 qpair failed and we were unable to recover it. 00:33:22.533 [2024-06-07 23:29:44.972454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.972682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.972696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.533 qpair failed and we were unable to recover it. 00:33:22.533 [2024-06-07 23:29:44.973051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.973385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.973395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.533 qpair failed and we were unable to recover it. 00:33:22.533 [2024-06-07 23:29:44.973734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.974071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.974080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.533 qpair failed and we were unable to recover it. 00:33:22.533 [2024-06-07 23:29:44.974430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.974802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.974811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.533 qpair failed and we were unable to recover it. 
00:33:22.533 [2024-06-07 23:29:44.975144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.975523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.975534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.533 qpair failed and we were unable to recover it. 00:33:22.533 [2024-06-07 23:29:44.975883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.976216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.976225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.533 qpair failed and we were unable to recover it. 00:33:22.533 [2024-06-07 23:29:44.976565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.976933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.976942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.533 qpair failed and we were unable to recover it. 00:33:22.533 [2024-06-07 23:29:44.977291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.977528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.977538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.533 qpair failed and we were unable to recover it. 00:33:22.533 [2024-06-07 23:29:44.977892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.978185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.978193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.533 qpair failed and we were unable to recover it. 00:33:22.533 [2024-06-07 23:29:44.978535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.978896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.978905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.533 qpair failed and we were unable to recover it. 00:33:22.533 [2024-06-07 23:29:44.979251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.979623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.979632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.533 qpair failed and we were unable to recover it. 
00:33:22.533 [2024-06-07 23:29:44.979962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.980310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.980320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.533 qpair failed and we were unable to recover it. 00:33:22.533 [2024-06-07 23:29:44.980564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.980871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.980880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.533 qpair failed and we were unable to recover it. 00:33:22.533 [2024-06-07 23:29:44.981227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.981596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.981607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.533 qpair failed and we were unable to recover it. 00:33:22.533 [2024-06-07 23:29:44.981945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.982279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.982289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.533 qpair failed and we were unable to recover it. 00:33:22.533 [2024-06-07 23:29:44.982635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.982900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.982909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.533 qpair failed and we were unable to recover it. 00:33:22.533 [2024-06-07 23:29:44.983279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.983601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.983610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.533 qpair failed and we were unable to recover it. 00:33:22.533 [2024-06-07 23:29:44.983859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.984183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.984192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.533 qpair failed and we were unable to recover it. 
00:33:22.533 [2024-06-07 23:29:44.984566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.984938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.984947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.533 qpair failed and we were unable to recover it. 00:33:22.533 [2024-06-07 23:29:44.985268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.985622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.985631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.533 qpair failed and we were unable to recover it. 00:33:22.533 [2024-06-07 23:29:44.985960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.986192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.986201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.533 qpair failed and we were unable to recover it. 00:33:22.533 [2024-06-07 23:29:44.986568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.986922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.986931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.533 qpair failed and we were unable to recover it. 00:33:22.533 [2024-06-07 23:29:44.987262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.987618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.987627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.533 qpair failed and we were unable to recover it. 00:33:22.533 [2024-06-07 23:29:44.987884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.988259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.988269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.533 qpair failed and we were unable to recover it. 00:33:22.533 [2024-06-07 23:29:44.988593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.988862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.988871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.533 qpair failed and we were unable to recover it. 
00:33:22.533 [2024-06-07 23:29:44.989198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.989569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.533 [2024-06-07 23:29:44.989578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.534 qpair failed and we were unable to recover it. 00:33:22.534 [2024-06-07 23:29:44.989902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:44.990281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:44.990291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.534 qpair failed and we were unable to recover it. 00:33:22.534 [2024-06-07 23:29:44.990636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:44.990892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:44.990902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.534 qpair failed and we were unable to recover it. 00:33:22.534 [2024-06-07 23:29:44.991272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:44.991614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:44.991623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.534 qpair failed and we were unable to recover it. 00:33:22.534 [2024-06-07 23:29:44.991862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:44.992154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:44.992163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.534 qpair failed and we were unable to recover it. 00:33:22.534 [2024-06-07 23:29:44.992366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:44.992738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:44.992747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.534 qpair failed and we were unable to recover it. 00:33:22.534 [2024-06-07 23:29:44.992952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:44.993322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:44.993331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.534 qpair failed and we were unable to recover it. 
00:33:22.534 [2024-06-07 23:29:44.993702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:44.994036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:44.994045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.534 qpair failed and we were unable to recover it. 00:33:22.534 [2024-06-07 23:29:44.994396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:44.994747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:44.994757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.534 qpair failed and we were unable to recover it. 00:33:22.534 [2024-06-07 23:29:44.995086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:44.995404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:44.995417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.534 qpair failed and we were unable to recover it. 00:33:22.534 [2024-06-07 23:29:44.995760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:44.996098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:44.996106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.534 qpair failed and we were unable to recover it. 00:33:22.534 [2024-06-07 23:29:44.996475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:44.996743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:44.996752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.534 qpair failed and we were unable to recover it. 00:33:22.534 [2024-06-07 23:29:44.997102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:44.997446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:44.997455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.534 qpair failed and we were unable to recover it. 00:33:22.534 [2024-06-07 23:29:44.997831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:44.998185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:44.998194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.534 qpair failed and we were unable to recover it. 
00:33:22.534 [2024-06-07 23:29:44.998563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:44.998744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:44.998754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.534 qpair failed and we were unable to recover it. 00:33:22.534 [2024-06-07 23:29:44.999150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:44.999511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:44.999520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.534 qpair failed and we were unable to recover it. 00:33:22.534 [2024-06-07 23:29:44.999890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:45.000223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:45.000233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.534 qpair failed and we were unable to recover it. 00:33:22.534 [2024-06-07 23:29:45.000613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:45.000994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:45.001004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.534 qpair failed and we were unable to recover it. 00:33:22.534 [2024-06-07 23:29:45.001258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:45.001578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:45.001588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.534 qpair failed and we were unable to recover it. 00:33:22.534 [2024-06-07 23:29:45.001906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:45.002277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:45.002287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.534 qpair failed and we were unable to recover it. 00:33:22.534 [2024-06-07 23:29:45.002701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:45.003074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:45.003082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.534 qpair failed and we were unable to recover it. 
00:33:22.534 [2024-06-07 23:29:45.003377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:45.003751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:45.003761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.534 qpair failed and we were unable to recover it. 00:33:22.534 [2024-06-07 23:29:45.004085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:45.004467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:45.004477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.534 qpair failed and we were unable to recover it. 00:33:22.534 [2024-06-07 23:29:45.004843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:45.005216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:45.005224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.534 qpair failed and we were unable to recover it. 00:33:22.534 [2024-06-07 23:29:45.005554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.534 [2024-06-07 23:29:45.005908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.005918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.535 qpair failed and we were unable to recover it. 00:33:22.535 [2024-06-07 23:29:45.006295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.006620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.006628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.535 qpair failed and we were unable to recover it. 00:33:22.535 [2024-06-07 23:29:45.006935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.007270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.007280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.535 qpair failed and we were unable to recover it. 00:33:22.535 [2024-06-07 23:29:45.007612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.007953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.007961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.535 qpair failed and we were unable to recover it. 
00:33:22.535 [2024-06-07 23:29:45.008326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.008681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.008690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.535 qpair failed and we were unable to recover it. 00:33:22.535 [2024-06-07 23:29:45.009025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.009420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.009430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.535 qpair failed and we were unable to recover it. 00:33:22.535 [2024-06-07 23:29:45.009772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.010111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.010119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.535 qpair failed and we were unable to recover it. 00:33:22.535 [2024-06-07 23:29:45.010368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.010760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.010769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.535 qpair failed and we were unable to recover it. 00:33:22.535 [2024-06-07 23:29:45.011117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.011490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.011499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.535 qpair failed and we were unable to recover it. 00:33:22.535 [2024-06-07 23:29:45.011862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.012239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.012253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.535 qpair failed and we were unable to recover it. 00:33:22.535 [2024-06-07 23:29:45.012504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.012865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.012875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.535 qpair failed and we were unable to recover it. 
00:33:22.535 [2024-06-07 23:29:45.013200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.013559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.013569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.535 qpair failed and we were unable to recover it. 00:33:22.535 [2024-06-07 23:29:45.013831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.014205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.014214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.535 qpair failed and we were unable to recover it. 00:33:22.535 [2024-06-07 23:29:45.014554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.014920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.014929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.535 qpair failed and we were unable to recover it. 00:33:22.535 [2024-06-07 23:29:45.015273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.015673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.015682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.535 qpair failed and we were unable to recover it. 00:33:22.535 [2024-06-07 23:29:45.016024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.016339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.016349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.535 qpair failed and we were unable to recover it. 00:33:22.535 [2024-06-07 23:29:45.016712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.016973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.016981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.535 qpair failed and we were unable to recover it. 00:33:22.535 [2024-06-07 23:29:45.017357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.017666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.017675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.535 qpair failed and we were unable to recover it. 
00:33:22.535 [2024-06-07 23:29:45.018030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.018361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.018370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.535 qpair failed and we were unable to recover it. 00:33:22.535 [2024-06-07 23:29:45.018702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.019086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.019096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.535 qpair failed and we were unable to recover it. 00:33:22.535 [2024-06-07 23:29:45.019400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.019768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.019777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.535 qpair failed and we were unable to recover it. 00:33:22.535 [2024-06-07 23:29:45.020112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.020457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.020466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.535 qpair failed and we were unable to recover it. 00:33:22.535 [2024-06-07 23:29:45.020819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.021185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.021195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.535 qpair failed and we were unable to recover it. 00:33:22.535 [2024-06-07 23:29:45.021536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.021900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.021909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.535 qpair failed and we were unable to recover it. 00:33:22.535 [2024-06-07 23:29:45.022255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.022617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.022627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.535 qpair failed and we were unable to recover it. 
00:33:22.535 [2024-06-07 23:29:45.023006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.023346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.023356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.535 qpair failed and we were unable to recover it. 00:33:22.535 [2024-06-07 23:29:45.023646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.024003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.024012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.535 qpair failed and we were unable to recover it. 00:33:22.535 [2024-06-07 23:29:45.024333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.024715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.535 [2024-06-07 23:29:45.024723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.535 qpair failed and we were unable to recover it. 00:33:22.536 [2024-06-07 23:29:45.025048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.025403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.025413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.536 qpair failed and we were unable to recover it. 00:33:22.536 [2024-06-07 23:29:45.025756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.026134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.026143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.536 qpair failed and we were unable to recover it. 00:33:22.536 [2024-06-07 23:29:45.026505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.026842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.026851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.536 qpair failed and we were unable to recover it. 00:33:22.536 [2024-06-07 23:29:45.027198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.027625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.027635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.536 qpair failed and we were unable to recover it. 
00:33:22.536 [2024-06-07 23:29:45.027983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.028209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.028219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.536 qpair failed and we were unable to recover it. 00:33:22.536 [2024-06-07 23:29:45.028552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.028715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.028726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.536 qpair failed and we were unable to recover it. 00:33:22.536 [2024-06-07 23:29:45.029039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.029374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.029384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.536 qpair failed and we were unable to recover it. 00:33:22.536 [2024-06-07 23:29:45.029736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.030092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.030101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.536 qpair failed and we were unable to recover it. 00:33:22.536 [2024-06-07 23:29:45.030448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.030848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.030859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.536 qpair failed and we were unable to recover it. 00:33:22.536 [2024-06-07 23:29:45.031196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.031432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.031442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.536 qpair failed and we were unable to recover it. 00:33:22.536 [2024-06-07 23:29:45.031807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.032142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.032151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.536 qpair failed and we were unable to recover it. 
00:33:22.536 [2024-06-07 23:29:45.032506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.032778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.032787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.536 qpair failed and we were unable to recover it. 00:33:22.536 [2024-06-07 23:29:45.033108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.033480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.033490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.536 qpair failed and we were unable to recover it. 00:33:22.536 [2024-06-07 23:29:45.033816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.034199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.034209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.536 qpair failed and we were unable to recover it. 00:33:22.536 [2024-06-07 23:29:45.034565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.034938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.034948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.536 qpair failed and we were unable to recover it. 00:33:22.536 [2024-06-07 23:29:45.035283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.035623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.035632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.536 qpair failed and we were unable to recover it. 00:33:22.536 [2024-06-07 23:29:45.035955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.036346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.036355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.536 qpair failed and we were unable to recover it. 00:33:22.536 [2024-06-07 23:29:45.036689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.037050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.037060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.536 qpair failed and we were unable to recover it. 
00:33:22.536 [2024-06-07 23:29:45.037384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.037732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.037743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.536 qpair failed and we were unable to recover it. 00:33:22.536 [2024-06-07 23:29:45.038106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.038445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.038455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.536 qpair failed and we were unable to recover it. 00:33:22.536 [2024-06-07 23:29:45.038811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.039154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.039164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.536 qpair failed and we were unable to recover it. 00:33:22.536 [2024-06-07 23:29:45.039454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.039794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.039803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.536 qpair failed and we were unable to recover it. 00:33:22.536 [2024-06-07 23:29:45.040122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.040493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.040502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.536 qpair failed and we were unable to recover it. 00:33:22.536 [2024-06-07 23:29:45.040825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.041207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.041217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.536 qpair failed and we were unable to recover it. 00:33:22.536 [2024-06-07 23:29:45.041570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.041945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.041955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.536 qpair failed and we were unable to recover it. 
00:33:22.536 [2024-06-07 23:29:45.042303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.042512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.042522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.536 qpair failed and we were unable to recover it. 00:33:22.536 [2024-06-07 23:29:45.042837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.043062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.043070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.536 qpair failed and we were unable to recover it. 00:33:22.536 [2024-06-07 23:29:45.043300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.043660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.536 [2024-06-07 23:29:45.043669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.536 qpair failed and we were unable to recover it. 00:33:22.537 [2024-06-07 23:29:45.043973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.044335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.044345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.537 qpair failed and we were unable to recover it. 00:33:22.537 [2024-06-07 23:29:45.044655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.044982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.044991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.537 qpair failed and we were unable to recover it. 00:33:22.537 [2024-06-07 23:29:45.045221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.045613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.045623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.537 qpair failed and we were unable to recover it. 00:33:22.537 [2024-06-07 23:29:45.045971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.046259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.046268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.537 qpair failed and we were unable to recover it. 
00:33:22.537 [2024-06-07 23:29:45.046641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.046980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.046988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.537 qpair failed and we were unable to recover it. 00:33:22.537 [2024-06-07 23:29:45.047377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.047731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.047741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.537 qpair failed and we were unable to recover it. 00:33:22.537 [2024-06-07 23:29:45.048098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.048442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.048451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.537 qpair failed and we were unable to recover it. 00:33:22.537 [2024-06-07 23:29:45.048824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.049151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.049161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.537 qpair failed and we were unable to recover it. 00:33:22.537 [2024-06-07 23:29:45.049450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.049783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.049792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.537 qpair failed and we were unable to recover it. 00:33:22.537 [2024-06-07 23:29:45.050166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.050490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.050500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.537 qpair failed and we were unable to recover it. 00:33:22.537 [2024-06-07 23:29:45.050745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.051080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.051089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.537 qpair failed and we were unable to recover it. 
00:33:22.537 [2024-06-07 23:29:45.051471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.051819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.051828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.537 qpair failed and we were unable to recover it. 00:33:22.537 [2024-06-07 23:29:45.052089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.052453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.052463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.537 qpair failed and we were unable to recover it. 00:33:22.537 [2024-06-07 23:29:45.052797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.053162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.053171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.537 qpair failed and we were unable to recover it. 00:33:22.537 [2024-06-07 23:29:45.053486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.053865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.053874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.537 qpair failed and we were unable to recover it. 00:33:22.537 [2024-06-07 23:29:45.054222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.054552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.054562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.537 qpair failed and we were unable to recover it. 00:33:22.537 [2024-06-07 23:29:45.054885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.055272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.055282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.537 qpair failed and we were unable to recover it. 00:33:22.537 [2024-06-07 23:29:45.055538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.056044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.056055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.537 qpair failed and we were unable to recover it. 
00:33:22.537 [2024-06-07 23:29:45.056280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.056641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.056651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.537 qpair failed and we were unable to recover it. 00:33:22.537 [2024-06-07 23:29:45.056985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.057354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.057363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.537 qpair failed and we were unable to recover it. 00:33:22.537 [2024-06-07 23:29:45.057694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.058045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.058053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.537 qpair failed and we were unable to recover it. 00:33:22.537 [2024-06-07 23:29:45.058418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.058789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.058799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.537 qpair failed and we were unable to recover it. 00:33:22.537 [2024-06-07 23:29:45.059160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.059495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.059504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.537 qpair failed and we were unable to recover it. 00:33:22.537 [2024-06-07 23:29:45.059831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.060197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.060206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.537 qpair failed and we were unable to recover it. 00:33:22.537 [2024-06-07 23:29:45.060530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.060902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.060911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.537 qpair failed and we were unable to recover it. 
00:33:22.537 [2024-06-07 23:29:45.061252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.061633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.061642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.537 qpair failed and we were unable to recover it. 00:33:22.537 [2024-06-07 23:29:45.061861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.062264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.062275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.537 qpair failed and we were unable to recover it. 00:33:22.537 [2024-06-07 23:29:45.062611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.062903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.062912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.537 qpair failed and we were unable to recover it. 00:33:22.537 [2024-06-07 23:29:45.063355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.063700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.537 [2024-06-07 23:29:45.063709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.537 qpair failed and we were unable to recover it. 00:33:22.537 [2024-06-07 23:29:45.064058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.064425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.064434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.538 qpair failed and we were unable to recover it. 00:33:22.538 [2024-06-07 23:29:45.064783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.065128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.065137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.538 qpair failed and we were unable to recover it. 00:33:22.538 [2024-06-07 23:29:45.065548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.065890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.065899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.538 qpair failed and we were unable to recover it. 
00:33:22.538 [2024-06-07 23:29:45.066249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.066592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.066601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.538 qpair failed and we were unable to recover it. 00:33:22.538 [2024-06-07 23:29:45.066963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.067297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.067307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.538 qpair failed and we were unable to recover it. 00:33:22.538 [2024-06-07 23:29:45.067688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.068033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.068043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.538 qpair failed and we were unable to recover it. 00:33:22.538 [2024-06-07 23:29:45.068436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.068797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.068806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.538 qpair failed and we were unable to recover it. 00:33:22.538 [2024-06-07 23:29:45.069060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.069282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.069291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.538 qpair failed and we were unable to recover it. 00:33:22.538 [2024-06-07 23:29:45.069673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.069926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.069935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.538 qpair failed and we were unable to recover it. 00:33:22.538 [2024-06-07 23:29:45.070376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.070723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.070732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.538 qpair failed and we were unable to recover it. 
00:33:22.538 [2024-06-07 23:29:45.071115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.071450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.071459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.538 qpair failed and we were unable to recover it. 00:33:22.538 [2024-06-07 23:29:45.071802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.072118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.072128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.538 qpair failed and we were unable to recover it. 00:33:22.538 [2024-06-07 23:29:45.072495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.072790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.072801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.538 qpair failed and we were unable to recover it. 00:33:22.538 [2024-06-07 23:29:45.073168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.073474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.073483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.538 qpair failed and we were unable to recover it. 00:33:22.538 [2024-06-07 23:29:45.073628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.073858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.073868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.538 qpair failed and we were unable to recover it. 00:33:22.538 [2024-06-07 23:29:45.074341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.074596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.074606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.538 qpair failed and we were unable to recover it. 00:33:22.538 [2024-06-07 23:29:45.074975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.075267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.075277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.538 qpair failed and we were unable to recover it. 
00:33:22.538 [2024-06-07 23:29:45.075633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.075885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.075895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.538 qpair failed and we were unable to recover it. 00:33:22.538 [2024-06-07 23:29:45.076110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.076449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.076458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.538 qpair failed and we were unable to recover it. 00:33:22.538 [2024-06-07 23:29:45.076826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.077019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.077027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.538 qpair failed and we were unable to recover it. 00:33:22.538 [2024-06-07 23:29:45.077386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.077746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.077755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.538 qpair failed and we were unable to recover it. 00:33:22.538 [2024-06-07 23:29:45.078080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.078436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.078446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.538 qpair failed and we were unable to recover it. 00:33:22.538 [2024-06-07 23:29:45.078666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.079087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.079096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.538 qpair failed and we were unable to recover it. 00:33:22.538 [2024-06-07 23:29:45.079434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.079770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.079779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.538 qpair failed and we were unable to recover it. 
00:33:22.538 [2024-06-07 23:29:45.080147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.080450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.080459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.538 qpair failed and we were unable to recover it. 00:33:22.538 [2024-06-07 23:29:45.080809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.081172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.081180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.538 qpair failed and we were unable to recover it. 00:33:22.538 [2024-06-07 23:29:45.081538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.081895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.081905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.538 qpair failed and we were unable to recover it. 00:33:22.538 [2024-06-07 23:29:45.082311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.082628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.082637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.538 qpair failed and we were unable to recover it. 00:33:22.538 [2024-06-07 23:29:45.083000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.538 [2024-06-07 23:29:45.083345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.083355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.539 qpair failed and we were unable to recover it. 00:33:22.539 [2024-06-07 23:29:45.083728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.083943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.083952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.539 qpair failed and we were unable to recover it. 00:33:22.539 [2024-06-07 23:29:45.084339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.084556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.084566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.539 qpair failed and we were unable to recover it. 
00:33:22.539 [2024-06-07 23:29:45.084904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.085115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.085124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.539 qpair failed and we were unable to recover it. 00:33:22.539 [2024-06-07 23:29:45.085443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.085819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.085828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.539 qpair failed and we were unable to recover it. 00:33:22.539 [2024-06-07 23:29:45.086123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.086517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.086527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.539 qpair failed and we were unable to recover it. 00:33:22.539 [2024-06-07 23:29:45.086876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.087210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.087218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.539 qpair failed and we were unable to recover it. 00:33:22.539 [2024-06-07 23:29:45.087458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.087827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.087836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.539 qpair failed and we were unable to recover it. 00:33:22.539 [2024-06-07 23:29:45.088172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.088407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.088416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.539 qpair failed and we were unable to recover it. 00:33:22.539 [2024-06-07 23:29:45.088782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.089156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.089165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.539 qpair failed and we were unable to recover it. 
00:33:22.539 [2024-06-07 23:29:45.089518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.089894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.089902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.539 qpair failed and we were unable to recover it. 00:33:22.539 [2024-06-07 23:29:45.090239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.090587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.090597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.539 qpair failed and we were unable to recover it. 00:33:22.539 [2024-06-07 23:29:45.090903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.091261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.091271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.539 qpair failed and we were unable to recover it. 00:33:22.539 [2024-06-07 23:29:45.091607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.091967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.091976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.539 qpair failed and we were unable to recover it. 00:33:22.539 [2024-06-07 23:29:45.092339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.092702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.092710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.539 qpair failed and we were unable to recover it. 00:33:22.539 [2024-06-07 23:29:45.093073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.093460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.093470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.539 qpair failed and we were unable to recover it. 00:33:22.539 [2024-06-07 23:29:45.093724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.094036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.094045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.539 qpair failed and we were unable to recover it. 
00:33:22.539 [2024-06-07 23:29:45.094362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.094576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.094586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.539 qpair failed and we were unable to recover it. 00:33:22.539 [2024-06-07 23:29:45.094933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.095293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.095303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.539 qpair failed and we were unable to recover it. 00:33:22.539 [2024-06-07 23:29:45.095634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.096014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.096024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.539 qpair failed and we were unable to recover it. 00:33:22.539 [2024-06-07 23:29:45.096317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.096526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.096536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.539 qpair failed and we were unable to recover it. 00:33:22.539 [2024-06-07 23:29:45.096914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.097289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.097298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.539 qpair failed and we were unable to recover it. 00:33:22.539 [2024-06-07 23:29:45.097624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.097913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.097922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.539 qpair failed and we were unable to recover it. 00:33:22.539 [2024-06-07 23:29:45.098252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.098609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.098618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.539 qpair failed and we were unable to recover it. 
00:33:22.539 [2024-06-07 23:29:45.098945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.539 [2024-06-07 23:29:45.099283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.099293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.540 qpair failed and we were unable to recover it. 00:33:22.540 [2024-06-07 23:29:45.099547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.099770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.099779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.540 qpair failed and we were unable to recover it. 00:33:22.540 [2024-06-07 23:29:45.100127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.100468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.100479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.540 qpair failed and we were unable to recover it. 00:33:22.540 [2024-06-07 23:29:45.100926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.101279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.101288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.540 qpair failed and we were unable to recover it. 00:33:22.540 [2024-06-07 23:29:45.101644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.102002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.102011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.540 qpair failed and we were unable to recover it. 00:33:22.540 [2024-06-07 23:29:45.102347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.102674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.102683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.540 qpair failed and we were unable to recover it. 00:33:22.540 [2024-06-07 23:29:45.103036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.103392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.103402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.540 qpair failed and we were unable to recover it. 
00:33:22.540 [2024-06-07 23:29:45.103762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.104135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.104144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.540 qpair failed and we were unable to recover it. 00:33:22.540 [2024-06-07 23:29:45.104583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.104910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.104919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.540 qpair failed and we were unable to recover it. 00:33:22.540 [2024-06-07 23:29:45.105233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.105615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.105624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.540 qpair failed and we were unable to recover it. 00:33:22.540 [2024-06-07 23:29:45.105941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.106279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.106288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.540 qpair failed and we were unable to recover it. 00:33:22.540 [2024-06-07 23:29:45.106632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.107004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.107017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.540 qpair failed and we were unable to recover it. 00:33:22.540 [2024-06-07 23:29:45.107348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.107692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.107701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.540 qpair failed and we were unable to recover it. 00:33:22.540 [2024-06-07 23:29:45.108048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.108308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.108317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.540 qpair failed and we were unable to recover it. 
00:33:22.540 [2024-06-07 23:29:45.108676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.109008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.109017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.540 qpair failed and we were unable to recover it. 00:33:22.540 [2024-06-07 23:29:45.109431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.109768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.109777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.540 qpair failed and we were unable to recover it. 00:33:22.540 [2024-06-07 23:29:45.110215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.110555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.110564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.540 qpair failed and we were unable to recover it. 00:33:22.540 [2024-06-07 23:29:45.110920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.111268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.111277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.540 qpair failed and we were unable to recover it. 00:33:22.540 [2024-06-07 23:29:45.111600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.111946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.111955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.540 qpair failed and we were unable to recover it. 00:33:22.540 [2024-06-07 23:29:45.112280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.112624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.112633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.540 qpair failed and we were unable to recover it. 00:33:22.540 [2024-06-07 23:29:45.112961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.113275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.113284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.540 qpair failed and we were unable to recover it. 
00:33:22.540 [2024-06-07 23:29:45.113675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.114035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.114044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.540 qpair failed and we were unable to recover it. 00:33:22.540 [2024-06-07 23:29:45.114409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.114747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.114756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.540 qpair failed and we were unable to recover it. 00:33:22.540 [2024-06-07 23:29:45.115141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.115455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.115464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.540 qpair failed and we were unable to recover it. 00:33:22.540 [2024-06-07 23:29:45.115822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.116095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.116103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.540 qpair failed and we were unable to recover it. 00:33:22.540 [2024-06-07 23:29:45.116464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.116720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.116729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.540 qpair failed and we were unable to recover it. 00:33:22.540 [2024-06-07 23:29:45.117118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.117315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.117326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.540 qpair failed and we were unable to recover it. 00:33:22.540 [2024-06-07 23:29:45.117710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.118041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.118050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.540 qpair failed and we were unable to recover it. 
00:33:22.540 [2024-06-07 23:29:45.118413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.118766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.540 [2024-06-07 23:29:45.118776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.541 qpair failed and we were unable to recover it. 00:33:22.541 [2024-06-07 23:29:45.119068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.119402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.119411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.541 qpair failed and we were unable to recover it. 00:33:22.541 [2024-06-07 23:29:45.119745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.120118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.120126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.541 qpair failed and we were unable to recover it. 00:33:22.541 [2024-06-07 23:29:45.120459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.120833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.120841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.541 qpair failed and we were unable to recover it. 00:33:22.541 [2024-06-07 23:29:45.121207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.121543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.121552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.541 qpair failed and we were unable to recover it. 00:33:22.541 [2024-06-07 23:29:45.121896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.122263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.122273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.541 qpair failed and we were unable to recover it. 00:33:22.541 [2024-06-07 23:29:45.122604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.122912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.122920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.541 qpair failed and we were unable to recover it. 
00:33:22.541 [2024-06-07 23:29:45.123248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.123503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.123512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.541 qpair failed and we were unable to recover it. 00:33:22.541 [2024-06-07 23:29:45.123859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.124233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.124248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.541 qpair failed and we were unable to recover it. 00:33:22.541 [2024-06-07 23:29:45.124587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.124925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.124934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.541 qpair failed and we were unable to recover it. 00:33:22.541 [2024-06-07 23:29:45.125232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.125598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.125608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.541 qpair failed and we were unable to recover it. 00:33:22.541 [2024-06-07 23:29:45.125974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.126326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.126335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.541 qpair failed and we were unable to recover it. 00:33:22.541 [2024-06-07 23:29:45.126662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.127015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.127024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.541 qpair failed and we were unable to recover it. 00:33:22.541 [2024-06-07 23:29:45.127366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.127604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.127613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.541 qpair failed and we were unable to recover it. 
00:33:22.541 [2024-06-07 23:29:45.127980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.128310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.128320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.541 qpair failed and we were unable to recover it. 00:33:22.541 [2024-06-07 23:29:45.128668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.129046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.129056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.541 qpair failed and we were unable to recover it. 00:33:22.541 [2024-06-07 23:29:45.129419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.129750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.129759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.541 qpair failed and we were unable to recover it. 00:33:22.541 [2024-06-07 23:29:45.130101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.130418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.130429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.541 qpair failed and we were unable to recover it. 00:33:22.541 [2024-06-07 23:29:45.130679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.131014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.131023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.541 qpair failed and we were unable to recover it. 00:33:22.541 [2024-06-07 23:29:45.131400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.131717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.131726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.541 qpair failed and we were unable to recover it. 00:33:22.541 [2024-06-07 23:29:45.132089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.132435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.132444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.541 qpair failed and we were unable to recover it. 
00:33:22.541 [2024-06-07 23:29:45.132821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.133200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.133210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.541 qpair failed and we were unable to recover it. 00:33:22.541 [2024-06-07 23:29:45.133548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.133744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.133752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.541 qpair failed and we were unable to recover it. 00:33:22.541 [2024-06-07 23:29:45.134100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.134461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.134471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.541 qpair failed and we were unable to recover it. 00:33:22.541 [2024-06-07 23:29:45.134851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.135226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.135235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.541 qpair failed and we were unable to recover it. 00:33:22.541 [2024-06-07 23:29:45.135597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.135929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.135938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.541 qpair failed and we were unable to recover it. 00:33:22.541 [2024-06-07 23:29:45.136294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.136622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.136632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.541 qpair failed and we were unable to recover it. 00:33:22.541 [2024-06-07 23:29:45.136943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.137255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.137264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.541 qpair failed and we were unable to recover it. 
00:33:22.541 [2024-06-07 23:29:45.137593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.137896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.137905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.541 qpair failed and we were unable to recover it. 00:33:22.541 [2024-06-07 23:29:45.138250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.541 [2024-06-07 23:29:45.138484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.138493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.542 qpair failed and we were unable to recover it. 00:33:22.542 [2024-06-07 23:29:45.138792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.139156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.139164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.542 qpair failed and we were unable to recover it. 00:33:22.542 [2024-06-07 23:29:45.139505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.139899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.139907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.542 qpair failed and we were unable to recover it. 00:33:22.542 [2024-06-07 23:29:45.140156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.140428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.140437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.542 qpair failed and we were unable to recover it. 00:33:22.542 [2024-06-07 23:29:45.140775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.141111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.141119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.542 qpair failed and we were unable to recover it. 00:33:22.542 [2024-06-07 23:29:45.141406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.141763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.141773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.542 qpair failed and we were unable to recover it. 
00:33:22.542 [2024-06-07 23:29:45.142138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.142497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.142506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.542 qpair failed and we were unable to recover it. 00:33:22.542 [2024-06-07 23:29:45.142872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.143258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.143268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.542 qpair failed and we were unable to recover it. 00:33:22.542 [2024-06-07 23:29:45.143637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.144008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.144017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.542 qpair failed and we were unable to recover it. 00:33:22.542 [2024-06-07 23:29:45.144341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.144710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.144719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.542 qpair failed and we were unable to recover it. 00:33:22.542 [2024-06-07 23:29:45.145068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.145366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.145375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.542 qpair failed and we were unable to recover it. 00:33:22.542 [2024-06-07 23:29:45.145751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.146117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.146126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.542 qpair failed and we were unable to recover it. 00:33:22.542 [2024-06-07 23:29:45.146464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.146845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.146854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.542 qpair failed and we were unable to recover it. 
00:33:22.542 [2024-06-07 23:29:45.147203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.147562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.147571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.542 qpair failed and we were unable to recover it. 00:33:22.542 [2024-06-07 23:29:45.147937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.148341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.148350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.542 qpair failed and we were unable to recover it. 00:33:22.542 [2024-06-07 23:29:45.148744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.149130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.149139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.542 qpair failed and we were unable to recover it. 00:33:22.542 [2024-06-07 23:29:45.149399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.149803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.149812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.542 qpair failed and we were unable to recover it. 00:33:22.542 [2024-06-07 23:29:45.150139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.150485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.150494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.542 qpair failed and we were unable to recover it. 00:33:22.542 [2024-06-07 23:29:45.150847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.151223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.151232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.542 qpair failed and we were unable to recover it. 00:33:22.542 [2024-06-07 23:29:45.151598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.151911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.151920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.542 qpair failed and we were unable to recover it. 
00:33:22.542 [2024-06-07 23:29:45.152287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.152626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.152635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.542 qpair failed and we were unable to recover it. 00:33:22.542 [2024-06-07 23:29:45.152969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.153350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.153360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.542 qpair failed and we were unable to recover it. 00:33:22.542 [2024-06-07 23:29:45.153608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.153864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.153873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.542 qpair failed and we were unable to recover it. 00:33:22.542 [2024-06-07 23:29:45.154259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.154651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.154659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.542 qpair failed and we were unable to recover it. 00:33:22.542 [2024-06-07 23:29:45.154986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.155352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.155361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.542 qpair failed and we were unable to recover it. 00:33:22.542 [2024-06-07 23:29:45.155733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.156083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.156092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.542 qpair failed and we were unable to recover it. 00:33:22.542 [2024-06-07 23:29:45.156428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.156768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.156777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.542 qpair failed and we were unable to recover it. 
00:33:22.542 [2024-06-07 23:29:45.157146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.157506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.157515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.542 qpair failed and we were unable to recover it. 00:33:22.542 [2024-06-07 23:29:45.157705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.158096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.158105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.542 qpair failed and we were unable to recover it. 00:33:22.542 [2024-06-07 23:29:45.158430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.158768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.542 [2024-06-07 23:29:45.158777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.543 qpair failed and we were unable to recover it. 00:33:22.543 [2024-06-07 23:29:45.159146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.159356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.159366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.543 qpair failed and we were unable to recover it. 00:33:22.543 [2024-06-07 23:29:45.159634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.159982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.159992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.543 qpair failed and we were unable to recover it. 00:33:22.543 [2024-06-07 23:29:45.160343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.160692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.160701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.543 qpair failed and we were unable to recover it. 00:33:22.543 [2024-06-07 23:29:45.161044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.161383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.161392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.543 qpair failed and we were unable to recover it. 
00:33:22.543 [2024-06-07 23:29:45.161757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.162138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.162147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.543 qpair failed and we were unable to recover it. 00:33:22.543 [2024-06-07 23:29:45.162505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.162801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.162810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.543 qpair failed and we were unable to recover it. 00:33:22.543 [2024-06-07 23:29:45.163166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.163518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.163527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.543 qpair failed and we were unable to recover it. 00:33:22.543 [2024-06-07 23:29:45.163926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.164265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.164274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.543 qpair failed and we were unable to recover it. 00:33:22.543 [2024-06-07 23:29:45.164629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.164993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.165002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.543 qpair failed and we were unable to recover it. 00:33:22.543 [2024-06-07 23:29:45.165354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.165726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.165735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.543 qpair failed and we were unable to recover it. 00:33:22.543 [2024-06-07 23:29:45.166100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.166369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.166379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.543 qpair failed and we were unable to recover it. 
00:33:22.543 [2024-06-07 23:29:45.166733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.167069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.167078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.543 qpair failed and we were unable to recover it. 00:33:22.543 [2024-06-07 23:29:45.167453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.167800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.167809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.543 qpair failed and we were unable to recover it. 00:33:22.543 [2024-06-07 23:29:45.168134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.168505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.168514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.543 qpair failed and we were unable to recover it. 00:33:22.543 [2024-06-07 23:29:45.168867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.169221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.169230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.543 qpair failed and we were unable to recover it. 00:33:22.543 [2024-06-07 23:29:45.169559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.169920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.169929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.543 qpair failed and we were unable to recover it. 00:33:22.543 [2024-06-07 23:29:45.170331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.170662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.170671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.543 qpair failed and we were unable to recover it. 00:33:22.543 [2024-06-07 23:29:45.171044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.171400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.171410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.543 qpair failed and we were unable to recover it. 
00:33:22.543 [2024-06-07 23:29:45.171809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.172144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.172153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.543 qpair failed and we were unable to recover it. 00:33:22.543 [2024-06-07 23:29:45.172505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.172876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.172885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.543 qpair failed and we were unable to recover it. 00:33:22.543 [2024-06-07 23:29:45.173210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.173573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.173583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.543 qpair failed and we were unable to recover it. 00:33:22.543 [2024-06-07 23:29:45.173946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.174330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.174339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.543 qpair failed and we were unable to recover it. 00:33:22.543 [2024-06-07 23:29:45.174669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.175040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.175049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.543 qpair failed and we were unable to recover it. 00:33:22.543 [2024-06-07 23:29:45.175396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.175624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.175634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.543 qpair failed and we were unable to recover it. 00:33:22.543 [2024-06-07 23:29:45.176018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.543 [2024-06-07 23:29:45.176236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.176249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.544 qpair failed and we were unable to recover it. 
00:33:22.544 [2024-06-07 23:29:45.176599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.176914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.176923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.544 qpair failed and we were unable to recover it. 00:33:22.544 [2024-06-07 23:29:45.177258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.177630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.177641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.544 qpair failed and we were unable to recover it. 00:33:22.544 [2024-06-07 23:29:45.177972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.178327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.178336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.544 qpair failed and we were unable to recover it. 00:33:22.544 [2024-06-07 23:29:45.178693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.178915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.178923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.544 qpair failed and we were unable to recover it. 00:33:22.544 [2024-06-07 23:29:45.179254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.179609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.179618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.544 qpair failed and we were unable to recover it. 00:33:22.544 [2024-06-07 23:29:45.179912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.180161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.180170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.544 qpair failed and we were unable to recover it. 00:33:22.544 [2024-06-07 23:29:45.180493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.180871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.180881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.544 qpair failed and we were unable to recover it. 
00:33:22.544 [2024-06-07 23:29:45.181220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.181523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.181532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.544 qpair failed and we were unable to recover it. 00:33:22.544 [2024-06-07 23:29:45.181882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.182266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.182275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.544 qpair failed and we were unable to recover it. 00:33:22.544 [2024-06-07 23:29:45.182623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.182861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.182870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.544 qpair failed and we were unable to recover it. 00:33:22.544 [2024-06-07 23:29:45.182963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.183290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.183300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.544 qpair failed and we were unable to recover it. 00:33:22.544 [2024-06-07 23:29:45.183649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.183883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.183894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.544 qpair failed and we were unable to recover it. 00:33:22.544 [2024-06-07 23:29:45.184231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.184582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.184591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.544 qpair failed and we were unable to recover it. 00:33:22.544 [2024-06-07 23:29:45.184918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.185292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.185301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.544 qpair failed and we were unable to recover it. 
00:33:22.544 [2024-06-07 23:29:45.185653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.185950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.185959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.544 qpair failed and we were unable to recover it. 00:33:22.544 [2024-06-07 23:29:45.186315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.186658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.186666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.544 qpair failed and we were unable to recover it. 00:33:22.544 [2024-06-07 23:29:45.186968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.187315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.187324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.544 qpair failed and we were unable to recover it. 00:33:22.544 [2024-06-07 23:29:45.187652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.187968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.187977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.544 qpair failed and we were unable to recover it. 00:33:22.544 [2024-06-07 23:29:45.188342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.188724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.188733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.544 qpair failed and we were unable to recover it. 00:33:22.544 [2024-06-07 23:29:45.188977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.189317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.189326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.544 qpair failed and we were unable to recover it. 00:33:22.544 [2024-06-07 23:29:45.189695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.190035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.190043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.544 qpair failed and we were unable to recover it. 
00:33:22.544 [2024-06-07 23:29:45.190410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.190781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.190789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.544 qpair failed and we were unable to recover it. 00:33:22.544 [2024-06-07 23:29:45.191129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.191471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.191480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.544 qpair failed and we were unable to recover it. 00:33:22.544 [2024-06-07 23:29:45.191802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.192159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.192168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.544 qpair failed and we were unable to recover it. 00:33:22.544 [2024-06-07 23:29:45.192504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.192876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.192885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.544 qpair failed and we were unable to recover it. 00:33:22.544 [2024-06-07 23:29:45.193208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.193573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.193582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.544 qpair failed and we were unable to recover it. 00:33:22.544 [2024-06-07 23:29:45.193814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.194190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.194198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.544 qpair failed and we were unable to recover it. 00:33:22.544 [2024-06-07 23:29:45.194564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.194896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.194904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.544 qpair failed and we were unable to recover it. 
00:33:22.544 [2024-06-07 23:29:45.195275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.544 [2024-06-07 23:29:45.195584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.545 [2024-06-07 23:29:45.195592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.545 qpair failed and we were unable to recover it. 00:33:22.545 [2024-06-07 23:29:45.195985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.545 [2024-06-07 23:29:45.196318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.545 [2024-06-07 23:29:45.196327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.545 qpair failed and we were unable to recover it. 00:33:22.545 [2024-06-07 23:29:45.196675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.545 [2024-06-07 23:29:45.197017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.545 [2024-06-07 23:29:45.197025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.545 qpair failed and we were unable to recover it. 00:33:22.545 [2024-06-07 23:29:45.197349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.545 [2024-06-07 23:29:45.197603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.545 [2024-06-07 23:29:45.197612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.545 qpair failed and we were unable to recover it. 00:33:22.545 [2024-06-07 23:29:45.197982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.545 [2024-06-07 23:29:45.198351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.545 [2024-06-07 23:29:45.198360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.545 qpair failed and we were unable to recover it. 00:33:22.545 [2024-06-07 23:29:45.198748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.545 [2024-06-07 23:29:45.199047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.545 [2024-06-07 23:29:45.199055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.545 qpair failed and we were unable to recover it. 00:33:22.545 [2024-06-07 23:29:45.199386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.545 [2024-06-07 23:29:45.199748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.545 [2024-06-07 23:29:45.199757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.545 qpair failed and we were unable to recover it. 
00:33:22.545 [2024-06-07 23:29:45.200099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.545 [2024-06-07 23:29:45.200365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.545 [2024-06-07 23:29:45.200374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.545 qpair failed and we were unable to recover it. 00:33:22.545 [2024-06-07 23:29:45.200775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.545 [2024-06-07 23:29:45.201136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.545 [2024-06-07 23:29:45.201144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.545 qpair failed and we were unable to recover it. 00:33:22.545 [2024-06-07 23:29:45.201494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.545 [2024-06-07 23:29:45.201867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.545 [2024-06-07 23:29:45.201876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.545 qpair failed and we were unable to recover it. 00:33:22.545 [2024-06-07 23:29:45.202212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.818 [2024-06-07 23:29:45.202571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.818 [2024-06-07 23:29:45.202582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.818 qpair failed and we were unable to recover it. 00:33:22.818 [2024-06-07 23:29:45.202931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.818 [2024-06-07 23:29:45.203268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.818 [2024-06-07 23:29:45.203278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.818 qpair failed and we were unable to recover it. 00:33:22.818 [2024-06-07 23:29:45.203634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.818 [2024-06-07 23:29:45.204004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.818 [2024-06-07 23:29:45.204013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.818 qpair failed and we were unable to recover it. 00:33:22.818 [2024-06-07 23:29:45.204345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.818 [2024-06-07 23:29:45.204713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.818 [2024-06-07 23:29:45.204723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.818 qpair failed and we were unable to recover it. 
00:33:22.818 [2024-06-07 23:29:45.204922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.818 [2024-06-07 23:29:45.205288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.818 [2024-06-07 23:29:45.205298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.818 qpair failed and we were unable to recover it. 00:33:22.818 [2024-06-07 23:29:45.205647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.818 [2024-06-07 23:29:45.206022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.818 [2024-06-07 23:29:45.206031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.818 qpair failed and we were unable to recover it. 00:33:22.818 [2024-06-07 23:29:45.206360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.818 [2024-06-07 23:29:45.206707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.818 [2024-06-07 23:29:45.206715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.818 qpair failed and we were unable to recover it. 00:33:22.818 [2024-06-07 23:29:45.207040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.818 [2024-06-07 23:29:45.207406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.818 [2024-06-07 23:29:45.207416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.818 qpair failed and we were unable to recover it. 00:33:22.818 [2024-06-07 23:29:45.207676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.818 [2024-06-07 23:29:45.208025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.818 [2024-06-07 23:29:45.208034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.818 qpair failed and we were unable to recover it. 00:33:22.818 [2024-06-07 23:29:45.208323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.818 [2024-06-07 23:29:45.208640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.818 [2024-06-07 23:29:45.208649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.818 qpair failed and we were unable to recover it. 00:33:22.818 [2024-06-07 23:29:45.208998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.818 [2024-06-07 23:29:45.209338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.818 [2024-06-07 23:29:45.209348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.818 qpair failed and we were unable to recover it. 
00:33:22.818 [2024-06-07 23:29:45.209679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.818 [2024-06-07 23:29:45.210061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.818 [2024-06-07 23:29:45.210070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.818 qpair failed and we were unable to recover it. 00:33:22.819 [2024-06-07 23:29:45.210423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.819 [2024-06-07 23:29:45.210798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.819 [2024-06-07 23:29:45.210806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.819 qpair failed and we were unable to recover it. 00:33:22.819 [2024-06-07 23:29:45.211137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.819 [2024-06-07 23:29:45.211510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.819 [2024-06-07 23:29:45.211519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.819 qpair failed and we were unable to recover it. 00:33:22.819 [2024-06-07 23:29:45.211897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.819 [2024-06-07 23:29:45.212275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.819 [2024-06-07 23:29:45.212284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.819 qpair failed and we were unable to recover it. 00:33:22.819 [2024-06-07 23:29:45.212617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.819 [2024-06-07 23:29:45.212973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.819 [2024-06-07 23:29:45.212982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.819 qpair failed and we were unable to recover it. 00:33:22.819 [2024-06-07 23:29:45.213338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.819 [2024-06-07 23:29:45.213683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.819 [2024-06-07 23:29:45.213692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.819 qpair failed and we were unable to recover it. 00:33:22.819 [2024-06-07 23:29:45.213987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.819 [2024-06-07 23:29:45.214319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.819 [2024-06-07 23:29:45.214329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.819 qpair failed and we were unable to recover it. 
00:33:22.819 [2024-06-07 23:29:45.214577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.819 [2024-06-07 23:29:45.214934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.819 [2024-06-07 23:29:45.214943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420
00:33:22.819 qpair failed and we were unable to recover it.
[... the identical three-record failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats for every connection attempt from 23:29:45.215194 through 23:29:45.281361 ...]
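On Linux, errno 111 is ECONNREFUSED: each connect() issued by posix_sock_create reaches 10.0.0.2, but nothing is listening on port 4420 while the target application is down, so every NVMe/TCP qpair connect attempt is rejected with the same three-record pattern. A minimal shell sketch of the same check, illustrative only (it is not part of the SPDK test scripts; the address and port are taken from the log above):

#!/usr/bin/env bash
# Poll the NVMe/TCP listener that the log shows being refused. bash's /dev/tcp
# pseudo-device performs an ordinary TCP connect(), so it fails with
# "Connection refused" (errno 111) exactly like posix_sock_create above.
ADDR=10.0.0.2
PORT=4420
for _ in $(seq 1 50); do
    if (exec 3<>"/dev/tcp/${ADDR}/${PORT}") 2>/dev/null; then
        echo "listener on ${ADDR}:${PORT} is accepting connections again"
        exit 0
    fi
    sleep 0.1
done
echo "listener on ${ADDR}:${PORT} is still refusing connections" >&2
exit 1

Until such a probe succeeds, the host-side qpair retries in the log can only keep failing in the same way.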
00:33:22.822 [2024-06-07 23:29:45.281685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
[... the same connect()/qpair failure pattern against tqpair=0x202fdb0 (addr=10.0.0.2, port=4420) keeps repeating in the background through 23:29:45.300076 while the shell trace below runs ...]
00:33:22.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 3057152 Killed "${NVMF_APP[@]}" "$@"
00:33:22.822 23:29:45 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:33:22.822 23:29:45 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:33:22.822 23:29:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:33:22.822 23:29:45 -- common/autotest_common.sh@712 -- # xtrace_disable
00:33:22.822 23:29:45 -- common/autotest_common.sh@10 -- # set +x
00:33:22.823 23:29:45 -- nvmf/common.sh@469 -- # nvmfpid=3058189
00:33:22.823 23:29:45 -- nvmf/common.sh@470 -- # waitforlisten 3058189
00:33:22.823 23:29:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:33:22.823 23:29:45 -- common/autotest_common.sh@819 -- # '[' -z 3058189 ']'
00:33:22.823 23:29:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:22.823 23:29:45 -- common/autotest_common.sh@824 -- # local max_retries=100
00:33:22.823 23:29:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:22.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:22.823 23:29:45 -- common/autotest_common.sh@828 -- # xtrace_disable
00:33:22.823 23:29:45 -- common/autotest_common.sh@10 -- # set +x
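The traced lines above show the recovery half of the test: the previous target instance (PID 3057152, "${NVMF_APP[@]}") was killed at target_disconnect.sh line 44, and disconnect_init then nvmfappstart relaunch nvmf_tgt inside the cvl_0_0_ns_spdk network namespace and wait for its RPC socket. A simplified stand-in for that sequence, illustrative only (the real helpers in nvmf/common.sh and autotest_common.sh also handle timing, xtrace state, and retry bookkeeping):

#!/usr/bin/env bash
# Relaunch the target in the test namespace and wait for its RPC UNIX socket,
# roughly what the traced nvmfappstart/waitforlisten calls do here.
NS=cvl_0_0_ns_spdk
NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
RPC_SOCK=/var/tmp/spdk.sock

ip netns exec "$NS" "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!
echo "Waiting for process to start up and listen on UNIX domain socket ${RPC_SOCK}..."

for _ in $(seq 1 100); do                  # max_retries=100, as in the trace
    [ -S "$RPC_SOCK" ] && exit 0           # RPC server is up once the socket exists
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done
echo "timed out waiting for ${RPC_SOCK}" >&2
exit 1

Until the relaunched target is configured and its 10.0.0.2:4420 listener is back, the connect() retries in the log keep being refused.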
00:33:22.823 [2024-06-07 23:29:45.300376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.823 [2024-06-07 23:29:45.300725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.823 [2024-06-07 23:29:45.300738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420
00:33:22.823 qpair failed and we were unable to recover it.
[... the same failure pattern continues for every subsequent attempt through 23:29:45.318302, still against 10.0.0.2:4420 ...]
00:33:22.824 [2024-06-07 23:29:45.318676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.824 [2024-06-07 23:29:45.319055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.824 [2024-06-07 23:29:45.319064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.824 qpair failed and we were unable to recover it. 00:33:22.824 [2024-06-07 23:29:45.319414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.824 [2024-06-07 23:29:45.319785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.824 [2024-06-07 23:29:45.319794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.824 qpair failed and we were unable to recover it. 00:33:22.824 [2024-06-07 23:29:45.320124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.824 [2024-06-07 23:29:45.320461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.824 [2024-06-07 23:29:45.320471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.824 qpair failed and we were unable to recover it. 00:33:22.824 [2024-06-07 23:29:45.320795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.824 [2024-06-07 23:29:45.321171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.321180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.825 qpair failed and we were unable to recover it. 00:33:22.825 [2024-06-07 23:29:45.321561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.321899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.321907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.825 qpair failed and we were unable to recover it. 00:33:22.825 [2024-06-07 23:29:45.322088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.322425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.322434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.825 qpair failed and we were unable to recover it. 00:33:22.825 [2024-06-07 23:29:45.322791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.323165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.323174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.825 qpair failed and we were unable to recover it. 
00:33:22.825 [2024-06-07 23:29:45.323408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.323747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.323756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.825 qpair failed and we were unable to recover it. 00:33:22.825 [2024-06-07 23:29:45.324127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.324474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.324484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.825 qpair failed and we were unable to recover it. 00:33:22.825 [2024-06-07 23:29:45.324865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.325204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.325213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.825 qpair failed and we were unable to recover it. 00:33:22.825 [2024-06-07 23:29:45.325453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.325797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.325807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.825 qpair failed and we were unable to recover it. 00:33:22.825 [2024-06-07 23:29:45.326181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.326565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.326574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.825 qpair failed and we were unable to recover it. 00:33:22.825 [2024-06-07 23:29:45.326917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.327137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.327146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.825 qpair failed and we were unable to recover it. 00:33:22.825 [2024-06-07 23:29:45.327529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.327913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.327924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.825 qpair failed and we were unable to recover it. 
00:33:22.825 [2024-06-07 23:29:45.328281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.328583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.328593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.825 qpair failed and we were unable to recover it. 00:33:22.825 [2024-06-07 23:29:45.328973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.329356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.329366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.825 qpair failed and we were unable to recover it. 00:33:22.825 [2024-06-07 23:29:45.329579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.329804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.329814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.825 qpair failed and we were unable to recover it. 00:33:22.825 [2024-06-07 23:29:45.330170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.330545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.330555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.825 qpair failed and we were unable to recover it. 00:33:22.825 [2024-06-07 23:29:45.330905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.331171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.331181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.825 qpair failed and we were unable to recover it. 00:33:22.825 [2024-06-07 23:29:45.331522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.331907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.331917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.825 qpair failed and we were unable to recover it. 00:33:22.825 [2024-06-07 23:29:45.332266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.332619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.332629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.825 qpair failed and we were unable to recover it. 
00:33:22.825 [2024-06-07 23:29:45.333011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.333160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.333170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.825 qpair failed and we were unable to recover it. 00:33:22.825 [2024-06-07 23:29:45.333522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.333742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.333752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.825 qpair failed and we were unable to recover it. 00:33:22.825 [2024-06-07 23:29:45.334072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.334450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.334462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.825 qpair failed and we were unable to recover it. 00:33:22.825 [2024-06-07 23:29:45.334812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.335193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.335203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.825 qpair failed and we were unable to recover it. 00:33:22.825 [2024-06-07 23:29:45.335541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.335771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.335781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.825 qpair failed and we were unable to recover it. 00:33:22.825 [2024-06-07 23:29:45.336136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.336506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.336516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.825 qpair failed and we were unable to recover it. 00:33:22.825 [2024-06-07 23:29:45.336884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.337225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.337235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.825 qpair failed and we were unable to recover it. 
00:33:22.825 [2024-06-07 23:29:45.337612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.337995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.338005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.825 qpair failed and we were unable to recover it. 00:33:22.825 [2024-06-07 23:29:45.338357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.338721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.338731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.825 qpair failed and we were unable to recover it. 00:33:22.825 [2024-06-07 23:29:45.339083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.339445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.339455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.825 qpair failed and we were unable to recover it. 00:33:22.825 [2024-06-07 23:29:45.339702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.825 [2024-06-07 23:29:45.340033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.340043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.826 qpair failed and we were unable to recover it. 00:33:22.826 [2024-06-07 23:29:45.340353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.340733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.340743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.826 qpair failed and we were unable to recover it. 00:33:22.826 [2024-06-07 23:29:45.341095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.341393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.341403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.826 qpair failed and we were unable to recover it. 00:33:22.826 [2024-06-07 23:29:45.341759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.341984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.341994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.826 qpair failed and we were unable to recover it. 
00:33:22.826 [2024-06-07 23:29:45.342258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.342585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.342594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.826 qpair failed and we were unable to recover it. 00:33:22.826 [2024-06-07 23:29:45.342920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.343161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.343170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.826 qpair failed and we were unable to recover it. 00:33:22.826 [2024-06-07 23:29:45.343525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.343857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.343865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.826 qpair failed and we were unable to recover it. 00:33:22.826 [2024-06-07 23:29:45.344086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.344327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.344337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.826 qpair failed and we were unable to recover it. 00:33:22.826 [2024-06-07 23:29:45.344724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.344968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.344976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.826 qpair failed and we were unable to recover it. 00:33:22.826 [2024-06-07 23:29:45.345262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.345607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.345616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.826 qpair failed and we were unable to recover it. 00:33:22.826 [2024-06-07 23:29:45.345844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.345981] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:33:22.826 [2024-06-07 23:29:45.346034] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:22.826 [2024-06-07 23:29:45.346226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.346237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.826 qpair failed and we were unable to recover it. 
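Editor's note: the "Starting SPDK v24.01.1-pre ... DPDK 23.11.0 initialization" record above shows the nvmf target process handing its EAL parameters to DPDK (-c 0xF0 is a core mask selecting cores 4-7, --file-prefix=spdk0 isolates this process's hugepage files, --proc-type=auto lets EAL decide primary/secondary). A minimal sketch, assuming DPDK headers are available, of how an application passes such options to rte_eal_init(); the argument list below mirrors a subset of the log line and is illustrative, not the exact test invocation:

/* Sketch: feeding EAL options like those in the log to DPDK.
 * A negative return from rte_eal_init() means initialization failed. */
#include <stdio.h>
#include <rte_eal.h>

int main(void)
{
    char *eal_argv[] = {
        "nvmf",                 /* program name, as in the log */
        "-c", "0xF0",           /* core mask: cores 4-7 */
        "--no-telemetry",
        "--file-prefix=spdk0",  /* separate hugepage namespace for this process */
        "--proc-type=auto",
    };
    int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        fprintf(stderr, "EAL initialization failed\n");
        return 1;
    }

    /* ... application work would happen here ... */

    rte_eal_cleanup();
    return 0;
}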
00:33:22.826 [2024-06-07 23:29:45.346535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.346881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.346890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.826 qpair failed and we were unable to recover it. 00:33:22.826 [2024-06-07 23:29:45.347228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.347585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.347596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.826 qpair failed and we were unable to recover it. 00:33:22.826 [2024-06-07 23:29:45.347940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.348319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.348330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.826 qpair failed and we were unable to recover it. 00:33:22.826 [2024-06-07 23:29:45.348704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.349038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.349048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.826 qpair failed and we were unable to recover it. 00:33:22.826 [2024-06-07 23:29:45.349398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.349754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.349765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.826 qpair failed and we were unable to recover it. 00:33:22.826 [2024-06-07 23:29:45.349981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.350334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.350344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.826 qpair failed and we were unable to recover it. 00:33:22.826 [2024-06-07 23:29:45.350711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.351090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.351099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.826 qpair failed and we were unable to recover it. 
00:33:22.826 [2024-06-07 23:29:45.351452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.351809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.351819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.826 qpair failed and we were unable to recover it. 00:33:22.826 [2024-06-07 23:29:45.352171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.352529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.352540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.826 qpair failed and we were unable to recover it. 00:33:22.826 [2024-06-07 23:29:45.352908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.353292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.353303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.826 qpair failed and we were unable to recover it. 00:33:22.826 [2024-06-07 23:29:45.353650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.354045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.354055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.826 qpair failed and we were unable to recover it. 00:33:22.826 [2024-06-07 23:29:45.354403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.354804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.354814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.826 qpair failed and we were unable to recover it. 00:33:22.826 [2024-06-07 23:29:45.355162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.355387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.355398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.826 qpair failed and we were unable to recover it. 00:33:22.826 [2024-06-07 23:29:45.355744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.356001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.356011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.826 qpair failed and we were unable to recover it. 
00:33:22.826 [2024-06-07 23:29:45.356348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.356723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.356733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.826 qpair failed and we were unable to recover it. 00:33:22.826 [2024-06-07 23:29:45.356947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.357262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.357273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.826 qpair failed and we were unable to recover it. 00:33:22.826 [2024-06-07 23:29:45.357589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.357840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.826 [2024-06-07 23:29:45.357850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.826 qpair failed and we were unable to recover it. 00:33:22.826 [2024-06-07 23:29:45.358218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.358566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.358576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.827 qpair failed and we were unable to recover it. 00:33:22.827 [2024-06-07 23:29:45.358923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.359259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.359269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.827 qpair failed and we were unable to recover it. 00:33:22.827 [2024-06-07 23:29:45.359663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.360039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.360049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.827 qpair failed and we were unable to recover it. 00:33:22.827 [2024-06-07 23:29:45.360324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.360597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.360608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.827 qpair failed and we were unable to recover it. 
00:33:22.827 [2024-06-07 23:29:45.360968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.361305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.361316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.827 qpair failed and we were unable to recover it. 00:33:22.827 [2024-06-07 23:29:45.361680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.362039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.362048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.827 qpair failed and we were unable to recover it. 00:33:22.827 [2024-06-07 23:29:45.362239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.362511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.362520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.827 qpair failed and we were unable to recover it. 00:33:22.827 [2024-06-07 23:29:45.362871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.363251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.363261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.827 qpair failed and we were unable to recover it. 00:33:22.827 [2024-06-07 23:29:45.363600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.363885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.363894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.827 qpair failed and we were unable to recover it. 00:33:22.827 [2024-06-07 23:29:45.364254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.364606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.364616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.827 qpair failed and we were unable to recover it. 00:33:22.827 [2024-06-07 23:29:45.364979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.365353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.365363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.827 qpair failed and we were unable to recover it. 
00:33:22.827 [2024-06-07 23:29:45.365593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.365955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.365964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.827 qpair failed and we were unable to recover it. 00:33:22.827 [2024-06-07 23:29:45.366325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.366692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.366701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.827 qpair failed and we were unable to recover it. 00:33:22.827 [2024-06-07 23:29:45.367052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.367399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.367409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.827 qpair failed and we were unable to recover it. 00:33:22.827 [2024-06-07 23:29:45.367628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.367969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.367982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.827 qpair failed and we were unable to recover it. 00:33:22.827 [2024-06-07 23:29:45.368331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.368593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.368602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.827 qpair failed and we were unable to recover it. 00:33:22.827 [2024-06-07 23:29:45.368856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.369273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.369282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.827 qpair failed and we were unable to recover it. 00:33:22.827 [2024-06-07 23:29:45.369687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.370024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.370034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.827 qpair failed and we were unable to recover it. 
00:33:22.827 [2024-06-07 23:29:45.370364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.370734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.370743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.827 qpair failed and we were unable to recover it. 00:33:22.827 [2024-06-07 23:29:45.371041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.371378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.371387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.827 qpair failed and we were unable to recover it. 00:33:22.827 [2024-06-07 23:29:45.371768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.372121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.372130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.827 qpair failed and we were unable to recover it. 00:33:22.827 [2024-06-07 23:29:45.372492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.372838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.372847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.827 qpair failed and we were unable to recover it. 00:33:22.827 [2024-06-07 23:29:45.373214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.373585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.373596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.827 qpair failed and we were unable to recover it. 00:33:22.827 [2024-06-07 23:29:45.373868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.374225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.374234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.827 qpair failed and we were unable to recover it. 00:33:22.827 [2024-06-07 23:29:45.374561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.827 [2024-06-07 23:29:45.374965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.828 [2024-06-07 23:29:45.374975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.828 qpair failed and we were unable to recover it. 
00:33:22.828 [2024-06-07 23:29:45.375323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.828 [2024-06-07 23:29:45.375668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.828 [2024-06-07 23:29:45.375678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.828 qpair failed and we were unable to recover it. 00:33:22.828 [2024-06-07 23:29:45.375900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.828 [2024-06-07 23:29:45.376267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.828 [2024-06-07 23:29:45.376277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.828 qpair failed and we were unable to recover it. 00:33:22.828 [2024-06-07 23:29:45.376597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.828 [2024-06-07 23:29:45.376969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.828 [2024-06-07 23:29:45.376978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.828 qpair failed and we were unable to recover it. 00:33:22.828 [2024-06-07 23:29:45.377346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.828 [2024-06-07 23:29:45.377579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.828 [2024-06-07 23:29:45.377588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.828 qpair failed and we were unable to recover it. 00:33:22.828 [2024-06-07 23:29:45.377955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.828 [2024-06-07 23:29:45.378291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.828 [2024-06-07 23:29:45.378301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.828 qpair failed and we were unable to recover it. 00:33:22.828 [2024-06-07 23:29:45.378674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.828 EAL: No free 2048 kB hugepages reported on node 1 00:33:22.828 [2024-06-07 23:29:45.379026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.828 [2024-06-07 23:29:45.379036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.828 qpair failed and we were unable to recover it. 00:33:22.828 [2024-06-07 23:29:45.379287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.828 [2024-06-07 23:29:45.379642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.828 [2024-06-07 23:29:45.379651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.828 qpair failed and we were unable to recover it. 
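Editor's note: interleaved with the socket errors above is the EAL message "No free 2048 kB hugepages reported on node 1", meaning NUMA node 1 had no free 2 MB hugepages when DPDK scanned the system (the run continues, presumably using memory from another node or page size). A small sketch of how those per-node counters can be read; the sysfs paths are standard Linux locations and the node numbers are examples, not an SPDK API:

/* Sketch: read the per-NUMA-node free 2048 kB hugepage counters that the
 * EAL message refers to. Returns -1 for a path that cannot be read. */
#include <stdio.h>

static long read_counter(const char *path)
{
    long value = -1;
    FILE *f = fopen(path, "r");
    if (f) {
        if (fscanf(f, "%ld", &value) != 1)
            value = -1;
        fclose(f);
    }
    return value;
}

int main(void)
{
    const char *paths[] = {
        "/sys/devices/system/node/node0/hugepages/hugepages-2048kB/free_hugepages",
        "/sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages",
    };

    for (int i = 0; i < 2; i++)
        printf("%s = %ld\n", paths[i], read_counter(paths[i]));

    return 0;
}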
00:33:22.828 [2024-06-07 23:29:45.379980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.828 [2024-06-07 23:29:45.380326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.828 [2024-06-07 23:29:45.380336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.828 qpair failed and we were unable to recover it. 00:33:22.828 [2024-06-07 23:29:45.380666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.828 [2024-06-07 23:29:45.380979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.828 [2024-06-07 23:29:45.380989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.828 qpair failed and we were unable to recover it. 00:33:22.828 [2024-06-07 23:29:45.381344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.828 [2024-06-07 23:29:45.381594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.828 [2024-06-07 23:29:45.381604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.828 qpair failed and we were unable to recover it. 00:33:22.828 [2024-06-07 23:29:45.381951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.828 [2024-06-07 23:29:45.382400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.828 [2024-06-07 23:29:45.382409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.828 qpair failed and we were unable to recover it. 00:33:22.828 [2024-06-07 23:29:45.382761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.828 [2024-06-07 23:29:45.382987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.828 [2024-06-07 23:29:45.382995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.828 qpair failed and we were unable to recover it. 00:33:22.828 [2024-06-07 23:29:45.383327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.828 [2024-06-07 23:29:45.383724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.828 [2024-06-07 23:29:45.383733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.828 qpair failed and we were unable to recover it. 00:33:22.828 [2024-06-07 23:29:45.384081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.828 [2024-06-07 23:29:45.384438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.828 [2024-06-07 23:29:45.384448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.828 qpair failed and we were unable to recover it. 
00:33:22.828 [2024-06-07 23:29:45.384873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.828 [2024-06-07 23:29:45.385250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.828 [2024-06-07 23:29:45.385260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420
00:33:22.828 qpair failed and we were unable to recover it.
[... the four entries above (two posix_sock_create connect() failures with errno = 111, the nvme_tcp_qpair_connect_sock error for tqpair=0x202fdb0 against 10.0.0.2 port 4420, and "qpair failed and we were unable to recover it.") repeat for every subsequent connection attempt; only the timestamps differ ...]
00:33:22.831 [2024-06-07 23:29:45.432211] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
[... the connect()/qpair retry error sequence continues before and after this notice ...]
00:33:22.833 [2024-06-07 23:29:45.461859] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:33:22.833 [2024-06-07 23:29:45.461987] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:22.833 [2024-06-07 23:29:45.461996] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:22.833 [2024-06-07 23:29:45.462004] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:22.833 [2024-06-07 23:29:45.462172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:33:22.833 [2024-06-07 23:29:45.462311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:33:22.833 [2024-06-07 23:29:45.462679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:33:22.833 [2024-06-07 23:29:45.462679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
[... the connect() failed (errno = 111) / qpair recovery failure entries remain interleaved with these notices ...]
[... the connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." sequence for tqpair=0x202fdb0 continues to repeat ...]
00:33:22.834 [2024-06-07 23:29:45.485380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.835 [2024-06-07 23:29:45.485591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.835 [2024-06-07 23:29:45.485600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.835 qpair failed and we were unable to recover it. 00:33:22.835 [2024-06-07 23:29:45.485944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.835 [2024-06-07 23:29:45.486162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.835 [2024-06-07 23:29:45.486170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:22.835 qpair failed and we were unable to recover it. 00:33:23.122 [2024-06-07 23:29:45.486524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.122 [2024-06-07 23:29:45.486865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.122 [2024-06-07 23:29:45.486875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.122 qpair failed and we were unable to recover it. 00:33:23.122 [2024-06-07 23:29:45.487218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.122 [2024-06-07 23:29:45.487595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.122 [2024-06-07 23:29:45.487604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.122 qpair failed and we were unable to recover it. 00:33:23.122 [2024-06-07 23:29:45.487960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.122 [2024-06-07 23:29:45.488186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.122 [2024-06-07 23:29:45.488198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.122 qpair failed and we were unable to recover it. 00:33:23.122 [2024-06-07 23:29:45.488426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.122 [2024-06-07 23:29:45.488601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.122 [2024-06-07 23:29:45.488612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.122 qpair failed and we were unable to recover it. 00:33:23.122 [2024-06-07 23:29:45.488958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.122 [2024-06-07 23:29:45.489171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.122 [2024-06-07 23:29:45.489180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.122 qpair failed and we were unable to recover it. 
00:33:23.122 [2024-06-07 23:29:45.489558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.122 [2024-06-07 23:29:45.489913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.122 [2024-06-07 23:29:45.489922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.122 qpair failed and we were unable to recover it. 00:33:23.122 [2024-06-07 23:29:45.490271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.122 [2024-06-07 23:29:45.490461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.122 [2024-06-07 23:29:45.490470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.122 qpair failed and we were unable to recover it. 00:33:23.122 [2024-06-07 23:29:45.490705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.122 [2024-06-07 23:29:45.490822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.122 [2024-06-07 23:29:45.490831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.122 qpair failed and we were unable to recover it. 00:33:23.122 [2024-06-07 23:29:45.491209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.122 [2024-06-07 23:29:45.491576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.122 [2024-06-07 23:29:45.491585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.122 qpair failed and we were unable to recover it. 00:33:23.122 [2024-06-07 23:29:45.491864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.122 [2024-06-07 23:29:45.492251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.122 [2024-06-07 23:29:45.492261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.122 qpair failed and we were unable to recover it. 00:33:23.122 [2024-06-07 23:29:45.492535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.492773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.492782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.123 qpair failed and we were unable to recover it. 00:33:23.123 [2024-06-07 23:29:45.492968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.493362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.493372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.123 qpair failed and we were unable to recover it. 
00:33:23.123 [2024-06-07 23:29:45.493753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.494044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.494053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.123 qpair failed and we were unable to recover it. 00:33:23.123 [2024-06-07 23:29:45.494471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.494826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.494835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.123 qpair failed and we were unable to recover it. 00:33:23.123 [2024-06-07 23:29:45.495045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.495469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.495479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.123 qpair failed and we were unable to recover it. 00:33:23.123 [2024-06-07 23:29:45.495812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.496160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.496170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.123 qpair failed and we were unable to recover it. 00:33:23.123 [2024-06-07 23:29:45.496409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.496729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.496738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.123 qpair failed and we were unable to recover it. 00:33:23.123 [2024-06-07 23:29:45.496954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.497231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.497240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.123 qpair failed and we were unable to recover it. 00:33:23.123 [2024-06-07 23:29:45.497457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.497779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.497788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.123 qpair failed and we were unable to recover it. 
00:33:23.123 [2024-06-07 23:29:45.497995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.498326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.498336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.123 qpair failed and we were unable to recover it. 00:33:23.123 [2024-06-07 23:29:45.498666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.498893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.498902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.123 qpair failed and we were unable to recover it. 00:33:23.123 [2024-06-07 23:29:45.499267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.499495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.499504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.123 qpair failed and we were unable to recover it. 00:33:23.123 [2024-06-07 23:29:45.499833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.500086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.500095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.123 qpair failed and we were unable to recover it. 00:33:23.123 [2024-06-07 23:29:45.500447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.500705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.500715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.123 qpair failed and we were unable to recover it. 00:33:23.123 [2024-06-07 23:29:45.501065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.501448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.501457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.123 qpair failed and we were unable to recover it. 00:33:23.123 [2024-06-07 23:29:45.501663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.502045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.502054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.123 qpair failed and we were unable to recover it. 
00:33:23.123 [2024-06-07 23:29:45.502394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.502759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.502768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.123 qpair failed and we were unable to recover it. 00:33:23.123 [2024-06-07 23:29:45.503112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.503322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.503332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.123 qpair failed and we were unable to recover it. 00:33:23.123 [2024-06-07 23:29:45.503643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.503988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.503996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.123 qpair failed and we were unable to recover it. 00:33:23.123 [2024-06-07 23:29:45.504202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.504414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.504423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.123 qpair failed and we were unable to recover it. 00:33:23.123 [2024-06-07 23:29:45.504780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.504885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.504895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.123 qpair failed and we were unable to recover it. 00:33:23.123 [2024-06-07 23:29:45.505239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.505601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.505610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.123 qpair failed and we were unable to recover it. 00:33:23.123 [2024-06-07 23:29:45.505981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.506195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.506204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.123 qpair failed and we were unable to recover it. 
00:33:23.123 [2024-06-07 23:29:45.506398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.506766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.123 [2024-06-07 23:29:45.506775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.124 qpair failed and we were unable to recover it. 00:33:23.124 [2024-06-07 23:29:45.506978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.507346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.507355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.124 qpair failed and we were unable to recover it. 00:33:23.124 [2024-06-07 23:29:45.507734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.508116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.508124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.124 qpair failed and we were unable to recover it. 00:33:23.124 [2024-06-07 23:29:45.508523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.508864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.508873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.124 qpair failed and we were unable to recover it. 00:33:23.124 [2024-06-07 23:29:45.509258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.509612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.509621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.124 qpair failed and we were unable to recover it. 00:33:23.124 [2024-06-07 23:29:45.509953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.510254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.510264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.124 qpair failed and we were unable to recover it. 00:33:23.124 [2024-06-07 23:29:45.510464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.510725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.510734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.124 qpair failed and we were unable to recover it. 
00:33:23.124 [2024-06-07 23:29:45.511089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.511496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.511505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.124 qpair failed and we were unable to recover it. 00:33:23.124 [2024-06-07 23:29:45.511851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.512212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.512220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.124 qpair failed and we were unable to recover it. 00:33:23.124 [2024-06-07 23:29:45.512473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.512853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.512862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.124 qpair failed and we were unable to recover it. 00:33:23.124 [2024-06-07 23:29:45.513195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.513555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.513564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.124 qpair failed and we were unable to recover it. 00:33:23.124 [2024-06-07 23:29:45.513862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.514033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.514042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.124 qpair failed and we were unable to recover it. 00:33:23.124 [2024-06-07 23:29:45.514386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.514734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.514743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.124 qpair failed and we were unable to recover it. 00:33:23.124 [2024-06-07 23:29:45.515124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.515476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.515485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.124 qpair failed and we were unable to recover it. 
00:33:23.124 [2024-06-07 23:29:45.515825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.516207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.516216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.124 qpair failed and we were unable to recover it. 00:33:23.124 [2024-06-07 23:29:45.516566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.516940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.516950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.124 qpair failed and we were unable to recover it. 00:33:23.124 [2024-06-07 23:29:45.517256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.517646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.517655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.124 qpair failed and we were unable to recover it. 00:33:23.124 [2024-06-07 23:29:45.517860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.517933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.517941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.124 qpair failed and we were unable to recover it. 00:33:23.124 [2024-06-07 23:29:45.518281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.518479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.518487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.124 qpair failed and we were unable to recover it. 00:33:23.124 [2024-06-07 23:29:45.518898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.519237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.519250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.124 qpair failed and we were unable to recover it. 00:33:23.124 [2024-06-07 23:29:45.519620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.519888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.519899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.124 qpair failed and we were unable to recover it. 
00:33:23.124 [2024-06-07 23:29:45.520231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.520477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.520486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.124 qpair failed and we were unable to recover it. 00:33:23.124 [2024-06-07 23:29:45.520868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.124 [2024-06-07 23:29:45.521211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.521219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.125 qpair failed and we were unable to recover it. 00:33:23.125 [2024-06-07 23:29:45.521608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.521956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.521965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.125 qpair failed and we were unable to recover it. 00:33:23.125 [2024-06-07 23:29:45.522050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.522372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.522381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.125 qpair failed and we were unable to recover it. 00:33:23.125 [2024-06-07 23:29:45.522785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.523079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.523089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.125 qpair failed and we were unable to recover it. 00:33:23.125 [2024-06-07 23:29:45.523455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.523817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.523825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.125 qpair failed and we were unable to recover it. 00:33:23.125 [2024-06-07 23:29:45.524110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.524456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.524465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.125 qpair failed and we were unable to recover it. 
00:33:23.125 [2024-06-07 23:29:45.524807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.525170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.525178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.125 qpair failed and we were unable to recover it. 00:33:23.125 [2024-06-07 23:29:45.525357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.525686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.525695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.125 qpair failed and we were unable to recover it. 00:33:23.125 [2024-06-07 23:29:45.525915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.526156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.526169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.125 qpair failed and we were unable to recover it. 00:33:23.125 [2024-06-07 23:29:45.526524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.526735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.526744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.125 qpair failed and we were unable to recover it. 00:33:23.125 [2024-06-07 23:29:45.527106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.527437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.527447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.125 qpair failed and we were unable to recover it. 00:33:23.125 [2024-06-07 23:29:45.527618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.527880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.527889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.125 qpair failed and we were unable to recover it. 00:33:23.125 [2024-06-07 23:29:45.528251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.528676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.528684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.125 qpair failed and we were unable to recover it. 
00:33:23.125 [2024-06-07 23:29:45.529023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.529089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.529097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.125 qpair failed and we were unable to recover it. 00:33:23.125 [2024-06-07 23:29:45.529407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.529758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.529767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.125 qpair failed and we were unable to recover it. 00:33:23.125 [2024-06-07 23:29:45.530117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.530326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.530336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.125 qpair failed and we were unable to recover it. 00:33:23.125 [2024-06-07 23:29:45.530681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.530915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.530924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.125 qpair failed and we were unable to recover it. 00:33:23.125 [2024-06-07 23:29:45.531178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.531539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.531548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.125 qpair failed and we were unable to recover it. 00:33:23.125 [2024-06-07 23:29:45.531935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.532336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.532345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.125 qpair failed and we were unable to recover it. 00:33:23.125 [2024-06-07 23:29:45.532674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.532932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.532941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.125 qpair failed and we were unable to recover it. 
00:33:23.125 [2024-06-07 23:29:45.533306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.533616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.533625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.125 qpair failed and we were unable to recover it. 00:33:23.125 [2024-06-07 23:29:45.533984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.534325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.125 [2024-06-07 23:29:45.534334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.126 qpair failed and we were unable to recover it. 00:33:23.126 [2024-06-07 23:29:45.534672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.534865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.534874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.126 qpair failed and we were unable to recover it. 00:33:23.126 [2024-06-07 23:29:45.534931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.535210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.535218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.126 qpair failed and we were unable to recover it. 00:33:23.126 [2024-06-07 23:29:45.535561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.535791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.535800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.126 qpair failed and we were unable to recover it. 00:33:23.126 [2024-06-07 23:29:45.536142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.536537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.536546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.126 qpair failed and we were unable to recover it. 00:33:23.126 [2024-06-07 23:29:45.536882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.537252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.537262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.126 qpair failed and we were unable to recover it. 
00:33:23.126 [2024-06-07 23:29:45.537595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.537811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.537819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.126 qpair failed and we were unable to recover it. 00:33:23.126 [2024-06-07 23:29:45.538202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.538546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.538556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.126 qpair failed and we were unable to recover it. 00:33:23.126 [2024-06-07 23:29:45.538885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.539258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.539267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.126 qpair failed and we were unable to recover it. 00:33:23.126 [2024-06-07 23:29:45.539665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.540053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.540062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.126 qpair failed and we were unable to recover it. 00:33:23.126 [2024-06-07 23:29:45.540391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.540768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.540777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.126 qpair failed and we were unable to recover it. 00:33:23.126 [2024-06-07 23:29:45.540961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.541311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.541320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.126 qpair failed and we were unable to recover it. 00:33:23.126 [2024-06-07 23:29:45.541659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.542021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.542030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.126 qpair failed and we were unable to recover it. 
00:33:23.126 [2024-06-07 23:29:45.542274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.542606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.542614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.126 qpair failed and we were unable to recover it. 00:33:23.126 [2024-06-07 23:29:45.542951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.543286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.543298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.126 qpair failed and we were unable to recover it. 00:33:23.126 [2024-06-07 23:29:45.543663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.543909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.543918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.126 qpair failed and we were unable to recover it. 00:33:23.126 [2024-06-07 23:29:45.544326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.544579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.544589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.126 qpair failed and we were unable to recover it. 00:33:23.126 [2024-06-07 23:29:45.544809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.545158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.545167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.126 qpair failed and we were unable to recover it. 00:33:23.126 [2024-06-07 23:29:45.545478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.545820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.545829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.126 qpair failed and we were unable to recover it. 00:33:23.126 [2024-06-07 23:29:45.546165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.546535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.546544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.126 qpair failed and we were unable to recover it. 
00:33:23.126 [2024-06-07 23:29:45.546795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.547171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.547180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.126 qpair failed and we were unable to recover it. 00:33:23.126 [2024-06-07 23:29:45.547529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.547759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.547767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.126 qpair failed and we were unable to recover it. 00:33:23.126 [2024-06-07 23:29:45.548108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.548485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.548494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.126 qpair failed and we were unable to recover it. 00:33:23.126 [2024-06-07 23:29:45.548725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.548926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.548935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.126 qpair failed and we were unable to recover it. 00:33:23.126 [2024-06-07 23:29:45.549292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.549620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.549629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.126 qpair failed and we were unable to recover it. 00:33:23.126 [2024-06-07 23:29:45.549689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.549838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.126 [2024-06-07 23:29:45.549847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.126 qpair failed and we were unable to recover it. 00:33:23.127 [2024-06-07 23:29:45.550097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.550452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.550461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.127 qpair failed and we were unable to recover it. 
00:33:23.127 [2024-06-07 23:29:45.550801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.551015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.551024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.127 qpair failed and we were unable to recover it. 00:33:23.127 [2024-06-07 23:29:45.551256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.551682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.551691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.127 qpair failed and we were unable to recover it. 00:33:23.127 [2024-06-07 23:29:45.552030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.552259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.552270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.127 qpair failed and we were unable to recover it. 00:33:23.127 [2024-06-07 23:29:45.552458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.552678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.552687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.127 qpair failed and we were unable to recover it. 00:33:23.127 [2024-06-07 23:29:45.553073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.553347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.553357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.127 qpair failed and we were unable to recover it. 00:33:23.127 [2024-06-07 23:29:45.553701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.554048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.554058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.127 qpair failed and we were unable to recover it. 00:33:23.127 [2024-06-07 23:29:45.554415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.554767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.554777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.127 qpair failed and we were unable to recover it. 
00:33:23.127 [2024-06-07 23:29:45.555130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.555345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.555355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.127 qpair failed and we were unable to recover it. 00:33:23.127 [2024-06-07 23:29:45.555622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.555804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.555813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.127 qpair failed and we were unable to recover it. 00:33:23.127 [2024-06-07 23:29:45.556215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.556649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.556659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.127 qpair failed and we were unable to recover it. 00:33:23.127 [2024-06-07 23:29:45.556950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.557334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.557343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.127 qpair failed and we were unable to recover it. 00:33:23.127 [2024-06-07 23:29:45.557439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.557798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.557809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.127 qpair failed and we were unable to recover it. 00:33:23.127 [2024-06-07 23:29:45.558095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.558427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.558437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.127 qpair failed and we were unable to recover it. 00:33:23.127 [2024-06-07 23:29:45.558790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.559021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.559030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.127 qpair failed and we were unable to recover it. 
00:33:23.127 [2024-06-07 23:29:45.559234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.559640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.559649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.127 qpair failed and we were unable to recover it. 00:33:23.127 [2024-06-07 23:29:45.559848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.560188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.560197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.127 qpair failed and we were unable to recover it. 00:33:23.127 [2024-06-07 23:29:45.560540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.560896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.560904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.127 qpair failed and we were unable to recover it. 00:33:23.127 [2024-06-07 23:29:45.561089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.561423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.561432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.127 qpair failed and we were unable to recover it. 00:33:23.127 [2024-06-07 23:29:45.561681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.562064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.562073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.127 qpair failed and we were unable to recover it. 00:33:23.127 [2024-06-07 23:29:45.562417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.562787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.562796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.127 qpair failed and we were unable to recover it. 00:33:23.127 [2024-06-07 23:29:45.562989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.563162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.563171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.127 qpair failed and we were unable to recover it. 
00:33:23.127 [2024-06-07 23:29:45.563538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.563770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.563779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.127 qpair failed and we were unable to recover it. 00:33:23.127 [2024-06-07 23:29:45.564185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.564445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.564454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.127 qpair failed and we were unable to recover it. 00:33:23.127 [2024-06-07 23:29:45.564807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.565070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.127 [2024-06-07 23:29:45.565080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.127 qpair failed and we were unable to recover it. 00:33:23.127 [2024-06-07 23:29:45.565448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.128 [2024-06-07 23:29:45.565816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.128 [2024-06-07 23:29:45.565825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.128 qpair failed and we were unable to recover it. 00:33:23.128 [2024-06-07 23:29:45.566194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.128 [2024-06-07 23:29:45.566402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.128 [2024-06-07 23:29:45.566411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.128 qpair failed and we were unable to recover it. 00:33:23.128 [2024-06-07 23:29:45.566632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.128 [2024-06-07 23:29:45.567020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.128 [2024-06-07 23:29:45.567029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.128 qpair failed and we were unable to recover it. 00:33:23.128 [2024-06-07 23:29:45.567324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.128 [2024-06-07 23:29:45.567701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.128 [2024-06-07 23:29:45.567709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.128 qpair failed and we were unable to recover it. 
00:33:23.128 [2024-06-07 23:29:45.568042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.128 [2024-06-07 23:29:45.568282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.128 [2024-06-07 23:29:45.568291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.128 qpair failed and we were unable to recover it. 00:33:23.128 [2024-06-07 23:29:45.568552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.128 [2024-06-07 23:29:45.568786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.128 [2024-06-07 23:29:45.568795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.128 qpair failed and we were unable to recover it. 00:33:23.128 [2024-06-07 23:29:45.569079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.128 [2024-06-07 23:29:45.569410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.128 [2024-06-07 23:29:45.569419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.128 qpair failed and we were unable to recover it. 00:33:23.128 [2024-06-07 23:29:45.569780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.128 [2024-06-07 23:29:45.570157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.128 [2024-06-07 23:29:45.570165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.128 qpair failed and we were unable to recover it. 00:33:23.128 [2024-06-07 23:29:45.570507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.128 [2024-06-07 23:29:45.570755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.128 [2024-06-07 23:29:45.570764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.128 qpair failed and we were unable to recover it. 00:33:23.128 [2024-06-07 23:29:45.571130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.128 [2024-06-07 23:29:45.571502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.128 [2024-06-07 23:29:45.571512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.128 qpair failed and we were unable to recover it. 00:33:23.128 [2024-06-07 23:29:45.571730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.128 [2024-06-07 23:29:45.572127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.128 [2024-06-07 23:29:45.572137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.128 qpair failed and we were unable to recover it. 
00:33:23.128 [2024-06-07 23:29:45.572499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.128 [2024-06-07 23:29:45.572858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.128 [2024-06-07 23:29:45.572867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.128 qpair failed and we were unable to recover it. 00:33:23.128 [2024-06-07 23:29:45.573077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.128 [2024-06-07 23:29:45.573466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.128 [2024-06-07 23:29:45.573475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.128 qpair failed and we were unable to recover it. 00:33:23.128 [2024-06-07 23:29:45.573647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.128 [2024-06-07 23:29:45.573916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.128 [2024-06-07 23:29:45.573925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.128 qpair failed and we were unable to recover it. 00:33:23.128 [2024-06-07 23:29:45.574007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.128 [2024-06-07 23:29:45.574216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.128 [2024-06-07 23:29:45.574225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.128 qpair failed and we were unable to recover it. 00:33:23.128 [2024-06-07 23:29:45.574597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.128 [2024-06-07 23:29:45.574947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.128 [2024-06-07 23:29:45.574956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.128 qpair failed and we were unable to recover it. 00:33:23.128 [2024-06-07 23:29:45.575342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.575574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.575584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.129 qpair failed and we were unable to recover it. 00:33:23.129 [2024-06-07 23:29:45.575832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.576228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.576236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.129 qpair failed and we were unable to recover it. 
00:33:23.129 [2024-06-07 23:29:45.576572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.576922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.576930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.129 qpair failed and we were unable to recover it. 00:33:23.129 [2024-06-07 23:29:45.577259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.577587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.577596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.129 qpair failed and we were unable to recover it. 00:33:23.129 [2024-06-07 23:29:45.577792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.578110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.578118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.129 qpair failed and we were unable to recover it. 00:33:23.129 [2024-06-07 23:29:45.578550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.578774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.578783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.129 qpair failed and we were unable to recover it. 00:33:23.129 [2024-06-07 23:29:45.579005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.579226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.579235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.129 qpair failed and we were unable to recover it. 00:33:23.129 [2024-06-07 23:29:45.579499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.579847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.579856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.129 qpair failed and we were unable to recover it. 00:33:23.129 [2024-06-07 23:29:45.580076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.580412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.580422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.129 qpair failed and we were unable to recover it. 
00:33:23.129 [2024-06-07 23:29:45.580709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.581048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.581057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.129 qpair failed and we were unable to recover it. 00:33:23.129 [2024-06-07 23:29:45.581414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.581794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.581803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.129 qpair failed and we were unable to recover it. 00:33:23.129 [2024-06-07 23:29:45.582006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.582209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.582218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.129 qpair failed and we were unable to recover it. 00:33:23.129 [2024-06-07 23:29:45.582477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.582568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.582576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.129 qpair failed and we were unable to recover it. 00:33:23.129 [2024-06-07 23:29:45.582852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.583059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.583068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.129 qpair failed and we were unable to recover it. 00:33:23.129 [2024-06-07 23:29:45.583303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.583528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.583537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.129 qpair failed and we were unable to recover it. 00:33:23.129 [2024-06-07 23:29:45.583884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.584260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.584269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.129 qpair failed and we were unable to recover it. 
00:33:23.129 [2024-06-07 23:29:45.584620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.584892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.584900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.129 qpair failed and we were unable to recover it. 00:33:23.129 [2024-06-07 23:29:45.585103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.585463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.585472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.129 qpair failed and we were unable to recover it. 00:33:23.129 [2024-06-07 23:29:45.585801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.586165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.586174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.129 qpair failed and we were unable to recover it. 00:33:23.129 [2024-06-07 23:29:45.586529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.586918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.586928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.129 qpair failed and we were unable to recover it. 00:33:23.129 [2024-06-07 23:29:45.587132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.587466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.129 [2024-06-07 23:29:45.587475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.129 qpair failed and we were unable to recover it. 00:33:23.130 [2024-06-07 23:29:45.587575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.587860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.587869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.130 qpair failed and we were unable to recover it. 00:33:23.130 [2024-06-07 23:29:45.588075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.588261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.588273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.130 qpair failed and we were unable to recover it. 
00:33:23.130 [2024-06-07 23:29:45.588604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.588941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.588950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.130 qpair failed and we were unable to recover it. 00:33:23.130 [2024-06-07 23:29:45.589190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.589317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.589326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.130 qpair failed and we were unable to recover it. 00:33:23.130 [2024-06-07 23:29:45.589529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.589852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.589861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.130 qpair failed and we were unable to recover it. 00:33:23.130 [2024-06-07 23:29:45.590202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.590570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.590579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.130 qpair failed and we were unable to recover it. 00:33:23.130 [2024-06-07 23:29:45.590767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.591103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.591111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.130 qpair failed and we were unable to recover it. 00:33:23.130 [2024-06-07 23:29:45.591177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.591494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.591504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.130 qpair failed and we were unable to recover it. 00:33:23.130 [2024-06-07 23:29:45.591840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.592052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.592060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.130 qpair failed and we were unable to recover it. 
00:33:23.130 [2024-06-07 23:29:45.592418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.592782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.592790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.130 qpair failed and we were unable to recover it. 00:33:23.130 [2024-06-07 23:29:45.593146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.593395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.593404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.130 qpair failed and we were unable to recover it. 00:33:23.130 [2024-06-07 23:29:45.593776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.593980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.593989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.130 qpair failed and we were unable to recover it. 00:33:23.130 [2024-06-07 23:29:45.594227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.594481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.594490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.130 qpair failed and we were unable to recover it. 00:33:23.130 [2024-06-07 23:29:45.594697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.595013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.595022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.130 qpair failed and we were unable to recover it. 00:33:23.130 [2024-06-07 23:29:45.595206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.595545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.595555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.130 qpair failed and we were unable to recover it. 00:33:23.130 [2024-06-07 23:29:45.595930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.596268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.596278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.130 qpair failed and we were unable to recover it. 
00:33:23.130 [2024-06-07 23:29:45.596495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.596776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.596785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.130 qpair failed and we were unable to recover it. 00:33:23.130 [2024-06-07 23:29:45.597137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.597353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.597362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.130 qpair failed and we were unable to recover it. 00:33:23.130 [2024-06-07 23:29:45.597727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.597972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.597982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.130 qpair failed and we were unable to recover it. 00:33:23.130 [2024-06-07 23:29:45.598238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.598451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.598460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.130 qpair failed and we were unable to recover it. 00:33:23.130 [2024-06-07 23:29:45.598786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.599000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.130 [2024-06-07 23:29:45.599008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.130 qpair failed and we were unable to recover it. 00:33:23.130 [2024-06-07 23:29:45.599354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.599633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.599643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.131 qpair failed and we were unable to recover it. 00:33:23.131 [2024-06-07 23:29:45.600002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.600068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.600077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.131 qpair failed and we were unable to recover it. 
00:33:23.131 [2024-06-07 23:29:45.600377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.600760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.600769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.131 qpair failed and we were unable to recover it. 00:33:23.131 [2024-06-07 23:29:45.601111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.601453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.601462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.131 qpair failed and we were unable to recover it. 00:33:23.131 [2024-06-07 23:29:45.601678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.602053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.602063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.131 qpair failed and we were unable to recover it. 00:33:23.131 [2024-06-07 23:29:45.602361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.602526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.602535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.131 qpair failed and we were unable to recover it. 00:33:23.131 [2024-06-07 23:29:45.602942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.603284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.603293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.131 qpair failed and we were unable to recover it. 00:33:23.131 [2024-06-07 23:29:45.603637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.604008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.604017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.131 qpair failed and we were unable to recover it. 00:33:23.131 [2024-06-07 23:29:45.604351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.604581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.604590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.131 qpair failed and we were unable to recover it. 
00:33:23.131 [2024-06-07 23:29:45.604824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.605052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.605060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.131 qpair failed and we were unable to recover it. 00:33:23.131 [2024-06-07 23:29:45.605408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.605794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.605802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.131 qpair failed and we were unable to recover it. 00:33:23.131 [2024-06-07 23:29:45.606135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.606342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.606351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.131 qpair failed and we were unable to recover it. 00:33:23.131 [2024-06-07 23:29:45.606722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.606780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.606790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.131 qpair failed and we were unable to recover it. 00:33:23.131 [2024-06-07 23:29:45.607125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.607310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.607319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.131 qpair failed and we were unable to recover it. 00:33:23.131 [2024-06-07 23:29:45.607683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.607980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.607988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.131 qpair failed and we were unable to recover it. 00:33:23.131 [2024-06-07 23:29:45.608174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.608380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.608389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.131 qpair failed and we were unable to recover it. 
00:33:23.131 [2024-06-07 23:29:45.608763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.609101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.609111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.131 qpair failed and we were unable to recover it. 00:33:23.131 [2024-06-07 23:29:45.609407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.609799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.609807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.131 qpair failed and we were unable to recover it. 00:33:23.131 [2024-06-07 23:29:45.610057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.610437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.610446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.131 qpair failed and we were unable to recover it. 00:33:23.131 [2024-06-07 23:29:45.610773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.611147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.611156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.131 qpair failed and we were unable to recover it. 00:33:23.131 [2024-06-07 23:29:45.611354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.611737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.611746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.131 qpair failed and we were unable to recover it. 00:33:23.131 [2024-06-07 23:29:45.612155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.612392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.131 [2024-06-07 23:29:45.612402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.131 qpair failed and we were unable to recover it. 00:33:23.131 [2024-06-07 23:29:45.612461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.612789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.612797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.132 qpair failed and we were unable to recover it. 
00:33:23.132 [2024-06-07 23:29:45.613131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.613479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.613489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.132 qpair failed and we were unable to recover it. 00:33:23.132 [2024-06-07 23:29:45.613684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.613928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.613936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.132 qpair failed and we were unable to recover it. 00:33:23.132 [2024-06-07 23:29:45.614282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.614510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.614519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.132 qpair failed and we were unable to recover it. 00:33:23.132 [2024-06-07 23:29:45.614785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.615140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.615149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.132 qpair failed and we were unable to recover it. 00:33:23.132 [2024-06-07 23:29:45.615519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.615855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.615865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.132 qpair failed and we were unable to recover it. 00:33:23.132 [2024-06-07 23:29:45.616221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.616560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.616570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.132 qpair failed and we were unable to recover it. 00:33:23.132 [2024-06-07 23:29:45.616860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.617048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.617057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.132 qpair failed and we were unable to recover it. 
00:33:23.132 [2024-06-07 23:29:45.617320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.617675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.617684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.132 qpair failed and we were unable to recover it. 00:33:23.132 [2024-06-07 23:29:45.618045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.618409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.618422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.132 qpair failed and we were unable to recover it. 00:33:23.132 [2024-06-07 23:29:45.618804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.619140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.619150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.132 qpair failed and we were unable to recover it. 00:33:23.132 [2024-06-07 23:29:45.619495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.619696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.619705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.132 qpair failed and we were unable to recover it. 00:33:23.132 [2024-06-07 23:29:45.619920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.620018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.620026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.132 qpair failed and we were unable to recover it. 00:33:23.132 [2024-06-07 23:29:45.620356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.620733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.620742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.132 qpair failed and we were unable to recover it. 00:33:23.132 [2024-06-07 23:29:45.620921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.621258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.621267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.132 qpair failed and we were unable to recover it. 
00:33:23.132 [2024-06-07 23:29:45.621619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.621927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.621935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.132 qpair failed and we were unable to recover it. 00:33:23.132 [2024-06-07 23:29:45.622294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.622640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.622649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.132 qpair failed and we were unable to recover it. 00:33:23.132 [2024-06-07 23:29:45.622979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.623371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.623380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.132 qpair failed and we were unable to recover it. 00:33:23.132 [2024-06-07 23:29:45.623724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.624076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.624085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.132 qpair failed and we were unable to recover it. 00:33:23.132 [2024-06-07 23:29:45.624331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.624692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.624703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.132 qpair failed and we were unable to recover it. 00:33:23.132 [2024-06-07 23:29:45.625072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.625443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.625453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.132 qpair failed and we were unable to recover it. 00:33:23.132 [2024-06-07 23:29:45.625654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.625857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.625866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.132 qpair failed and we were unable to recover it. 
00:33:23.132 [2024-06-07 23:29:45.626216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.626556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.132 [2024-06-07 23:29:45.626565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.132 qpair failed and we were unable to recover it. 00:33:23.133 [2024-06-07 23:29:45.626918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.627305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.627315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.133 qpair failed and we were unable to recover it. 00:33:23.133 [2024-06-07 23:29:45.627559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.627732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.627741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.133 qpair failed and we were unable to recover it. 00:33:23.133 [2024-06-07 23:29:45.628100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.628441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.628451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.133 qpair failed and we were unable to recover it. 00:33:23.133 [2024-06-07 23:29:45.628665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.629065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.629074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.133 qpair failed and we were unable to recover it. 00:33:23.133 [2024-06-07 23:29:45.629269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.629502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.629511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.133 qpair failed and we were unable to recover it. 00:33:23.133 [2024-06-07 23:29:45.629859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.630216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.630225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.133 qpair failed and we were unable to recover it. 
00:33:23.133 [2024-06-07 23:29:45.630552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.630913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.630922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.133 qpair failed and we were unable to recover it. 00:33:23.133 [2024-06-07 23:29:45.631158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.631503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.631512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.133 qpair failed and we were unable to recover it. 00:33:23.133 [2024-06-07 23:29:45.631753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.632142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.632151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.133 qpair failed and we were unable to recover it. 00:33:23.133 [2024-06-07 23:29:45.632520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.632854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.632864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.133 qpair failed and we were unable to recover it. 00:33:23.133 [2024-06-07 23:29:45.633217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.633481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.633490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.133 qpair failed and we were unable to recover it. 00:33:23.133 [2024-06-07 23:29:45.633834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.634041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.634051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.133 qpair failed and we were unable to recover it. 00:33:23.133 [2024-06-07 23:29:45.634275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.634603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.634611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.133 qpair failed and we were unable to recover it. 
00:33:23.133 [2024-06-07 23:29:45.634952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.635319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.635328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.133 qpair failed and we were unable to recover it. 00:33:23.133 [2024-06-07 23:29:45.635698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.636062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.636070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.133 qpair failed and we were unable to recover it. 00:33:23.133 [2024-06-07 23:29:45.636398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.636618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.636626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.133 qpair failed and we were unable to recover it. 00:33:23.133 [2024-06-07 23:29:45.636828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.637025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.637035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.133 qpair failed and we were unable to recover it. 00:33:23.133 [2024-06-07 23:29:45.637282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.637665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.637673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.133 qpair failed and we were unable to recover it. 00:33:23.133 [2024-06-07 23:29:45.638044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.638381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.638390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.133 qpair failed and we were unable to recover it. 00:33:23.133 [2024-06-07 23:29:45.638745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.639056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.639065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.133 qpair failed and we were unable to recover it. 
00:33:23.133 [2024-06-07 23:29:45.639398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.639762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.639771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.133 qpair failed and we were unable to recover it. 00:33:23.133 [2024-06-07 23:29:45.640117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.640293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.640302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.133 qpair failed and we were unable to recover it. 00:33:23.133 [2024-06-07 23:29:45.640611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.640971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.640980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.133 qpair failed and we were unable to recover it. 00:33:23.133 [2024-06-07 23:29:45.641310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.641541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.133 [2024-06-07 23:29:45.641549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.133 qpair failed and we were unable to recover it. 00:33:23.133 [2024-06-07 23:29:45.641918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.642143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.642152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.134 qpair failed and we were unable to recover it. 00:33:23.134 [2024-06-07 23:29:45.642360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.642555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.642565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.134 qpair failed and we were unable to recover it. 00:33:23.134 [2024-06-07 23:29:45.642997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.643229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.643240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.134 qpair failed and we were unable to recover it. 
00:33:23.134 [2024-06-07 23:29:45.643515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.643722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.643731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.134 qpair failed and we were unable to recover it. 00:33:23.134 [2024-06-07 23:29:45.644146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.644498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.644508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.134 qpair failed and we were unable to recover it. 00:33:23.134 [2024-06-07 23:29:45.644703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.645040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.645049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.134 qpair failed and we were unable to recover it. 00:33:23.134 [2024-06-07 23:29:45.645385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.645588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.645597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.134 qpair failed and we were unable to recover it. 00:33:23.134 [2024-06-07 23:29:45.645963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.646308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.646317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.134 qpair failed and we were unable to recover it. 00:33:23.134 [2024-06-07 23:29:45.646681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.646904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.646913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.134 qpair failed and we were unable to recover it. 00:33:23.134 [2024-06-07 23:29:45.647249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.647598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.647608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.134 qpair failed and we were unable to recover it. 
00:33:23.134 [2024-06-07 23:29:45.647781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.648154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.648164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.134 qpair failed and we were unable to recover it. 00:33:23.134 [2024-06-07 23:29:45.648377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.648761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.648771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.134 qpair failed and we were unable to recover it. 00:33:23.134 [2024-06-07 23:29:45.649125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.649449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.649458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.134 qpair failed and we were unable to recover it. 00:33:23.134 [2024-06-07 23:29:45.649817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.650165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.650175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.134 qpair failed and we were unable to recover it. 00:33:23.134 [2024-06-07 23:29:45.650534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.650679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.650688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.134 qpair failed and we were unable to recover it. 00:33:23.134 [2024-06-07 23:29:45.651049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.651302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.651311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.134 qpair failed and we were unable to recover it. 00:33:23.134 [2024-06-07 23:29:45.651716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.652057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.652066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.134 qpair failed and we were unable to recover it. 
00:33:23.134 [2024-06-07 23:29:45.652285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.652383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.652392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.134 qpair failed and we were unable to recover it. 00:33:23.134 [2024-06-07 23:29:45.652749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.653092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.653102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.134 qpair failed and we were unable to recover it. 00:33:23.134 [2024-06-07 23:29:45.653319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.653686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.134 [2024-06-07 23:29:45.653695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.134 qpair failed and we were unable to recover it. 00:33:23.134 [2024-06-07 23:29:45.654046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.654435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.654447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.135 qpair failed and we were unable to recover it. 00:33:23.135 [2024-06-07 23:29:45.654791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.655015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.655023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.135 qpair failed and we were unable to recover it. 00:33:23.135 [2024-06-07 23:29:45.655363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.655711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.655720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.135 qpair failed and we were unable to recover it. 00:33:23.135 [2024-06-07 23:29:45.656000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.656368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.656379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.135 qpair failed and we were unable to recover it. 
00:33:23.135 [2024-06-07 23:29:45.656743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.657060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.657070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.135 qpair failed and we were unable to recover it. 00:33:23.135 [2024-06-07 23:29:45.657445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.657679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.657688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.135 qpair failed and we were unable to recover it. 00:33:23.135 [2024-06-07 23:29:45.658039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.658404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.658413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.135 qpair failed and we were unable to recover it. 00:33:23.135 [2024-06-07 23:29:45.658636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.659045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.659054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.135 qpair failed and we were unable to recover it. 00:33:23.135 [2024-06-07 23:29:45.659430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.659790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.659799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.135 qpair failed and we were unable to recover it. 00:33:23.135 [2024-06-07 23:29:45.660139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.660485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.660494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.135 qpair failed and we were unable to recover it. 00:33:23.135 [2024-06-07 23:29:45.660840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.661143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.661152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.135 qpair failed and we were unable to recover it. 
00:33:23.135 [2024-06-07 23:29:45.661491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.661857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.661865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.135 qpair failed and we were unable to recover it. 00:33:23.135 [2024-06-07 23:29:45.662196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.662567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.662577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.135 qpair failed and we were unable to recover it. 00:33:23.135 [2024-06-07 23:29:45.662917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.663125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.663135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.135 qpair failed and we were unable to recover it. 00:33:23.135 [2024-06-07 23:29:45.663496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.663877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.663886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.135 qpair failed and we were unable to recover it. 00:33:23.135 [2024-06-07 23:29:45.664219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.664580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.664589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.135 qpair failed and we were unable to recover it. 00:33:23.135 [2024-06-07 23:29:45.664794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.665157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.665166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.135 qpair failed and we were unable to recover it. 00:33:23.135 [2024-06-07 23:29:45.665516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.665883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.665892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.135 qpair failed and we were unable to recover it. 
00:33:23.135 [2024-06-07 23:29:45.666219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.666581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.666591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.135 qpair failed and we were unable to recover it. 00:33:23.135 [2024-06-07 23:29:45.666933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.667278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.667287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.135 qpair failed and we were unable to recover it. 00:33:23.135 [2024-06-07 23:29:45.667618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.667868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.667877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.135 qpair failed and we were unable to recover it. 00:33:23.135 [2024-06-07 23:29:45.668253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.668599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.668608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.135 qpair failed and we were unable to recover it. 00:33:23.135 [2024-06-07 23:29:45.668835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.669161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.135 [2024-06-07 23:29:45.669169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.136 qpair failed and we were unable to recover it. 00:33:23.136 [2024-06-07 23:29:45.669565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.669754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.669763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.136 qpair failed and we were unable to recover it. 00:33:23.136 [2024-06-07 23:29:45.670095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.670353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.670362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.136 qpair failed and we were unable to recover it. 
00:33:23.136 [2024-06-07 23:29:45.670727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.671064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.671072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.136 qpair failed and we were unable to recover it. 00:33:23.136 [2024-06-07 23:29:45.671458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.671796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.671805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.136 qpair failed and we were unable to recover it. 00:33:23.136 [2024-06-07 23:29:45.672027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.672372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.672381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.136 qpair failed and we were unable to recover it. 00:33:23.136 [2024-06-07 23:29:45.672748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.673006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.673016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.136 qpair failed and we were unable to recover it. 00:33:23.136 [2024-06-07 23:29:45.673204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.673517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.673526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.136 qpair failed and we were unable to recover it. 00:33:23.136 [2024-06-07 23:29:45.673860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.674224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.674232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.136 qpair failed and we were unable to recover it. 00:33:23.136 [2024-06-07 23:29:45.674611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.674986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.674994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.136 qpair failed and we were unable to recover it. 
00:33:23.136 [2024-06-07 23:29:45.675204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.675404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.675413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.136 qpair failed and we were unable to recover it. 00:33:23.136 [2024-06-07 23:29:45.675830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.676212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.676220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.136 qpair failed and we were unable to recover it. 00:33:23.136 [2024-06-07 23:29:45.676281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.676588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.676597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.136 qpair failed and we were unable to recover it. 00:33:23.136 [2024-06-07 23:29:45.676939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.677153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.677162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.136 qpair failed and we were unable to recover it. 00:33:23.136 [2024-06-07 23:29:45.677487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.677685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.677695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.136 qpair failed and we were unable to recover it. 00:33:23.136 [2024-06-07 23:29:45.678092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.678291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.678300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.136 qpair failed and we were unable to recover it. 00:33:23.136 [2024-06-07 23:29:45.678626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.678964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.678973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.136 qpair failed and we were unable to recover it. 
00:33:23.136 [2024-06-07 23:29:45.679324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.679414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.679422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.136 qpair failed and we were unable to recover it. 00:33:23.136 [2024-06-07 23:29:45.679548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.679814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.679822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.136 qpair failed and we were unable to recover it. 00:33:23.136 [2024-06-07 23:29:45.680174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.680381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.680391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.136 qpair failed and we were unable to recover it. 00:33:23.136 [2024-06-07 23:29:45.680595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.680968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.680976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.136 qpair failed and we were unable to recover it. 00:33:23.136 [2024-06-07 23:29:45.681304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.681683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.681692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.136 qpair failed and we were unable to recover it. 00:33:23.136 [2024-06-07 23:29:45.682025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.682387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.136 [2024-06-07 23:29:45.682396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.136 qpair failed and we were unable to recover it. 00:33:23.137 [2024-06-07 23:29:45.682573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.682796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.682805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.137 qpair failed and we were unable to recover it. 
00:33:23.137 [2024-06-07 23:29:45.682996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.683340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.683350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.137 qpair failed and we were unable to recover it. 00:33:23.137 [2024-06-07 23:29:45.683551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.683892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.683901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.137 qpair failed and we were unable to recover it. 00:33:23.137 [2024-06-07 23:29:45.684250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.684450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.684459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.137 qpair failed and we were unable to recover it. 00:33:23.137 [2024-06-07 23:29:45.684827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.685166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.685174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.137 qpair failed and we were unable to recover it. 00:33:23.137 [2024-06-07 23:29:45.685377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.685740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.685749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.137 qpair failed and we were unable to recover it. 00:33:23.137 [2024-06-07 23:29:45.686104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.686453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.686462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.137 qpair failed and we were unable to recover it. 00:33:23.137 [2024-06-07 23:29:45.686828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.687165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.687174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.137 qpair failed and we were unable to recover it. 
00:33:23.137 [2024-06-07 23:29:45.687380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.687600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.687608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.137 qpair failed and we were unable to recover it. 00:33:23.137 [2024-06-07 23:29:45.687912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.688265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.688276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.137 qpair failed and we were unable to recover it. 00:33:23.137 [2024-06-07 23:29:45.688519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.688858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.688867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.137 qpair failed and we were unable to recover it. 00:33:23.137 [2024-06-07 23:29:45.689202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.689675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.689685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.137 qpair failed and we were unable to recover it. 00:33:23.137 [2024-06-07 23:29:45.690018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.690308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.690317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.137 qpair failed and we were unable to recover it. 00:33:23.137 [2024-06-07 23:29:45.690587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.690926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.690936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.137 qpair failed and we were unable to recover it. 00:33:23.137 [2024-06-07 23:29:45.691279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.691632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.691641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.137 qpair failed and we were unable to recover it. 
00:33:23.137 [2024-06-07 23:29:45.691850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.692091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.692099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.137 qpair failed and we were unable to recover it. 00:33:23.137 [2024-06-07 23:29:45.692431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.692648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.692656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.137 qpair failed and we were unable to recover it. 00:33:23.137 [2024-06-07 23:29:45.693059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.693439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.693448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.137 qpair failed and we were unable to recover it. 00:33:23.137 [2024-06-07 23:29:45.693672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.693873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.693881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.137 qpair failed and we were unable to recover it. 00:33:23.137 [2024-06-07 23:29:45.694089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.694399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.694409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.137 qpair failed and we were unable to recover it. 00:33:23.137 [2024-06-07 23:29:45.694816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.695017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.695026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.137 qpair failed and we were unable to recover it. 00:33:23.137 [2024-06-07 23:29:45.695360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.695704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.695713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.137 qpair failed and we were unable to recover it. 
00:33:23.137 [2024-06-07 23:29:45.695926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.696266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.137 [2024-06-07 23:29:45.696276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.137 qpair failed and we were unable to recover it. 00:33:23.137 [2024-06-07 23:29:45.696498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.138 [2024-06-07 23:29:45.696819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.138 [2024-06-07 23:29:45.696827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.138 qpair failed and we were unable to recover it. 00:33:23.138 [2024-06-07 23:29:45.697019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.138 [2024-06-07 23:29:45.697344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.138 [2024-06-07 23:29:45.697353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.138 qpair failed and we were unable to recover it. 00:33:23.138 [2024-06-07 23:29:45.697580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.138 [2024-06-07 23:29:45.697820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.138 [2024-06-07 23:29:45.697829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.138 qpair failed and we were unable to recover it. 00:33:23.138 [2024-06-07 23:29:45.698183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.138 [2024-06-07 23:29:45.698509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.138 [2024-06-07 23:29:45.698518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.138 qpair failed and we were unable to recover it. 00:33:23.138 [2024-06-07 23:29:45.698857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.138 [2024-06-07 23:29:45.699234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.138 [2024-06-07 23:29:45.699246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.138 qpair failed and we were unable to recover it. 00:33:23.138 [2024-06-07 23:29:45.699636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.138 [2024-06-07 23:29:45.699957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.138 [2024-06-07 23:29:45.699966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.138 qpair failed and we were unable to recover it. 
00:33:23.138 [2024-06-07 23:29:45.700173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.138 [2024-06-07 23:29:45.700390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.138 [2024-06-07 23:29:45.700399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420
00:33:23.138 qpair failed and we were unable to recover it.
00:33:23.138 [... the same failure sequence (two posix.c:1032:posix_sock_create "connect() failed, errno = 111" errors, followed by nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420" and "qpair failed and we were unable to recover it.") repeats continuously with no other output from 23:29:45.700173 through 23:29:45.796263; the wall-clock prefix advances from 00:33:23.138 to 00:33:23.413 over these attempts ...]
00:33:23.413 [2024-06-07 23:29:45.796545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.413 [2024-06-07 23:29:45.796939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.413 [2024-06-07 23:29:45.796947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.413 qpair failed and we were unable to recover it. 00:33:23.413 [2024-06-07 23:29:45.797188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.413 [2024-06-07 23:29:45.797450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.413 [2024-06-07 23:29:45.797459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.413 qpair failed and we were unable to recover it. 00:33:23.413 [2024-06-07 23:29:45.797810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.413 [2024-06-07 23:29:45.798193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.413 [2024-06-07 23:29:45.798202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.413 qpair failed and we were unable to recover it. 00:33:23.413 [2024-06-07 23:29:45.798548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.413 [2024-06-07 23:29:45.798917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.413 [2024-06-07 23:29:45.798925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.413 qpair failed and we were unable to recover it. 00:33:23.413 [2024-06-07 23:29:45.799296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.413 [2024-06-07 23:29:45.799392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.413 [2024-06-07 23:29:45.799403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.413 qpair failed and we were unable to recover it. 00:33:23.413 [2024-06-07 23:29:45.799690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.413 [2024-06-07 23:29:45.800071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.413 [2024-06-07 23:29:45.800080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.413 qpair failed and we were unable to recover it. 00:33:23.413 [2024-06-07 23:29:45.800393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.413 [2024-06-07 23:29:45.800761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.413 [2024-06-07 23:29:45.800769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.413 qpair failed and we were unable to recover it. 
00:33:23.413 [2024-06-07 23:29:45.801134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.413 [2024-06-07 23:29:45.801361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.413 [2024-06-07 23:29:45.801370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.413 qpair failed and we were unable to recover it. 00:33:23.413 [2024-06-07 23:29:45.801721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.413 [2024-06-07 23:29:45.802099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.413 [2024-06-07 23:29:45.802107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.413 qpair failed and we were unable to recover it. 00:33:23.413 [2024-06-07 23:29:45.802519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.413 [2024-06-07 23:29:45.802871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.413 [2024-06-07 23:29:45.802879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.413 qpair failed and we were unable to recover it. 00:33:23.413 [2024-06-07 23:29:45.803209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.413 [2024-06-07 23:29:45.803485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.413 [2024-06-07 23:29:45.803495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.413 qpair failed and we were unable to recover it. 00:33:23.413 [2024-06-07 23:29:45.803864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.413 [2024-06-07 23:29:45.804210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.413 [2024-06-07 23:29:45.804218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.413 qpair failed and we were unable to recover it. 00:33:23.413 [2024-06-07 23:29:45.804567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.413 [2024-06-07 23:29:45.804925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.413 [2024-06-07 23:29:45.804934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.413 qpair failed and we were unable to recover it. 00:33:23.414 [2024-06-07 23:29:45.805166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.805325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.805334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.414 qpair failed and we were unable to recover it. 
00:33:23.414 [2024-06-07 23:29:45.805687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.806033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.806041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.414 qpair failed and we were unable to recover it. 00:33:23.414 [2024-06-07 23:29:45.806379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.806593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.806602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.414 qpair failed and we were unable to recover it. 00:33:23.414 [2024-06-07 23:29:45.806964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.807307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.807322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.414 qpair failed and we were unable to recover it. 00:33:23.414 [2024-06-07 23:29:45.807714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.807943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.807951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.414 qpair failed and we were unable to recover it. 00:33:23.414 [2024-06-07 23:29:45.808163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.808552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.808561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.414 qpair failed and we were unable to recover it. 00:33:23.414 [2024-06-07 23:29:45.808895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.809264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.809273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.414 qpair failed and we were unable to recover it. 00:33:23.414 [2024-06-07 23:29:45.809738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.810004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.810013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.414 qpair failed and we were unable to recover it. 
00:33:23.414 [2024-06-07 23:29:45.810385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.810596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.810606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.414 qpair failed and we were unable to recover it. 00:33:23.414 [2024-06-07 23:29:45.810959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.811339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.811349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.414 qpair failed and we were unable to recover it. 00:33:23.414 [2024-06-07 23:29:45.811560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.811946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.811954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.414 qpair failed and we were unable to recover it. 00:33:23.414 [2024-06-07 23:29:45.812204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.812539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.812548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.414 qpair failed and we were unable to recover it. 00:33:23.414 [2024-06-07 23:29:45.812884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.813252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.813261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.414 qpair failed and we were unable to recover it. 00:33:23.414 [2024-06-07 23:29:45.813586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.813935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.813944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.414 qpair failed and we were unable to recover it. 00:33:23.414 [2024-06-07 23:29:45.814205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.814577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.814586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.414 qpair failed and we were unable to recover it. 
00:33:23.414 [2024-06-07 23:29:45.814921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.815266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.815275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.414 qpair failed and we were unable to recover it. 00:33:23.414 [2024-06-07 23:29:45.815630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.816053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.816061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.414 qpair failed and we were unable to recover it. 00:33:23.414 [2024-06-07 23:29:45.816411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.816774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.816783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.414 qpair failed and we were unable to recover it. 00:33:23.414 [2024-06-07 23:29:45.817066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.817370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.817379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.414 qpair failed and we were unable to recover it. 00:33:23.414 [2024-06-07 23:29:45.817739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.817796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.817805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.414 qpair failed and we were unable to recover it. 00:33:23.414 [2024-06-07 23:29:45.818157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.818513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.818523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.414 qpair failed and we were unable to recover it. 00:33:23.414 [2024-06-07 23:29:45.818704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.819103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.819112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.414 qpair failed and we were unable to recover it. 
00:33:23.414 [2024-06-07 23:29:45.819587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.819793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.819802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.414 qpair failed and we were unable to recover it. 00:33:23.414 [2024-06-07 23:29:45.820010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.820326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.820339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.414 qpair failed and we were unable to recover it. 00:33:23.414 [2024-06-07 23:29:45.820699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.820940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.820950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.414 qpair failed and we were unable to recover it. 00:33:23.414 [2024-06-07 23:29:45.821308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.821692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.821700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.414 qpair failed and we were unable to recover it. 00:33:23.414 [2024-06-07 23:29:45.822081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.822429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.822438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.414 qpair failed and we were unable to recover it. 00:33:23.414 [2024-06-07 23:29:45.822650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.414 [2024-06-07 23:29:45.822876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.822887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.415 qpair failed and we were unable to recover it. 00:33:23.415 [2024-06-07 23:29:45.823235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.823613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.823623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.415 qpair failed and we were unable to recover it. 
00:33:23.415 [2024-06-07 23:29:45.823882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.824262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.824271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.415 qpair failed and we were unable to recover it. 00:33:23.415 [2024-06-07 23:29:45.824467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.824854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.824862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.415 qpair failed and we were unable to recover it. 00:33:23.415 [2024-06-07 23:29:45.825016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.825362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.825370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.415 qpair failed and we were unable to recover it. 00:33:23.415 [2024-06-07 23:29:45.825629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.825976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.825984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.415 qpair failed and we were unable to recover it. 00:33:23.415 [2024-06-07 23:29:45.826197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.826408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.826418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.415 qpair failed and we were unable to recover it. 00:33:23.415 [2024-06-07 23:29:45.826779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.826871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.826880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.415 qpair failed and we were unable to recover it. 00:33:23.415 [2024-06-07 23:29:45.827140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.827491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.827500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.415 qpair failed and we were unable to recover it. 
00:33:23.415 [2024-06-07 23:29:45.827884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.828264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.828274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.415 qpair failed and we were unable to recover it. 00:33:23.415 [2024-06-07 23:29:45.828609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.828835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.828844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.415 qpair failed and we were unable to recover it. 00:33:23.415 [2024-06-07 23:29:45.829050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.829239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.829257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.415 qpair failed and we were unable to recover it. 00:33:23.415 [2024-06-07 23:29:45.829615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.829951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.829960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.415 qpair failed and we were unable to recover it. 00:33:23.415 [2024-06-07 23:29:45.830294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.830486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.830495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.415 qpair failed and we were unable to recover it. 00:33:23.415 [2024-06-07 23:29:45.830729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.831066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.831075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.415 qpair failed and we were unable to recover it. 00:33:23.415 [2024-06-07 23:29:45.831405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.831766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.831775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.415 qpair failed and we were unable to recover it. 
00:33:23.415 [2024-06-07 23:29:45.831983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.832212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.832220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.415 qpair failed and we were unable to recover it. 00:33:23.415 [2024-06-07 23:29:45.832684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.833025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.833033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.415 qpair failed and we were unable to recover it. 00:33:23.415 [2024-06-07 23:29:45.833365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.833700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.833709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.415 qpair failed and we were unable to recover it. 00:33:23.415 [2024-06-07 23:29:45.833956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.834344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.834353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.415 qpair failed and we were unable to recover it. 00:33:23.415 [2024-06-07 23:29:45.834531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.834849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.834858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.415 qpair failed and we were unable to recover it. 00:33:23.415 [2024-06-07 23:29:45.835203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.835441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.835451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.415 qpair failed and we were unable to recover it. 00:33:23.415 [2024-06-07 23:29:45.835683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.836024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.836033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.415 qpair failed and we were unable to recover it. 
00:33:23.415 [2024-06-07 23:29:45.836399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.836759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.836768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.415 qpair failed and we were unable to recover it. 00:33:23.415 [2024-06-07 23:29:45.837125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.837476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.837485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.415 qpair failed and we were unable to recover it. 00:33:23.415 [2024-06-07 23:29:45.837829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.838054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.838063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.415 qpair failed and we were unable to recover it. 00:33:23.415 [2024-06-07 23:29:45.838283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.838599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.838608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.415 qpair failed and we were unable to recover it. 00:33:23.415 [2024-06-07 23:29:45.838962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.839182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.415 [2024-06-07 23:29:45.839191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.415 qpair failed and we were unable to recover it. 00:33:23.416 [2024-06-07 23:29:45.839542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.839770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.839779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.416 qpair failed and we were unable to recover it. 00:33:23.416 [2024-06-07 23:29:45.840177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.840556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.840565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.416 qpair failed and we were unable to recover it. 
00:33:23.416 [2024-06-07 23:29:45.840930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.841156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.841165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.416 qpair failed and we were unable to recover it. 00:33:23.416 [2024-06-07 23:29:45.841527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.841895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.841904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.416 qpair failed and we were unable to recover it. 00:33:23.416 [2024-06-07 23:29:45.842082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.842294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.842303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.416 qpair failed and we were unable to recover it. 00:33:23.416 [2024-06-07 23:29:45.842658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.842874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.842883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.416 qpair failed and we were unable to recover it. 00:33:23.416 [2024-06-07 23:29:45.843137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.843478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.843488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.416 qpair failed and we were unable to recover it. 00:33:23.416 [2024-06-07 23:29:45.843842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.844069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.844077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.416 qpair failed and we were unable to recover it. 00:33:23.416 [2024-06-07 23:29:45.844418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.844715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.844724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.416 qpair failed and we were unable to recover it. 
00:33:23.416 [2024-06-07 23:29:45.845103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.845521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.845530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.416 qpair failed and we were unable to recover it. 00:33:23.416 [2024-06-07 23:29:45.845874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.846233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.846246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.416 qpair failed and we were unable to recover it. 00:33:23.416 [2024-06-07 23:29:45.846577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.846894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.846903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.416 qpair failed and we were unable to recover it. 00:33:23.416 [2024-06-07 23:29:45.847213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.847481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.847491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.416 qpair failed and we were unable to recover it. 00:33:23.416 [2024-06-07 23:29:45.847864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.848104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.848113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.416 qpair failed and we were unable to recover it. 00:33:23.416 [2024-06-07 23:29:45.848337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.848673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.848683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.416 qpair failed and we were unable to recover it. 00:33:23.416 [2024-06-07 23:29:45.849058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.849418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.849427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.416 qpair failed and we were unable to recover it. 
00:33:23.416 [2024-06-07 23:29:45.849808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.850117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.850127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.416 qpair failed and we were unable to recover it. 00:33:23.416 [2024-06-07 23:29:45.850477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.850859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.850868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.416 qpair failed and we were unable to recover it. 00:33:23.416 [2024-06-07 23:29:45.851197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.851602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.851612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.416 qpair failed and we were unable to recover it. 00:33:23.416 [2024-06-07 23:29:45.851958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.852150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.852159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.416 qpair failed and we were unable to recover it. 00:33:23.416 [2024-06-07 23:29:45.852504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.852796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.852805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.416 qpair failed and we were unable to recover it. 00:33:23.416 [2024-06-07 23:29:45.853097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.853414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.853424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.416 qpair failed and we were unable to recover it. 00:33:23.416 [2024-06-07 23:29:45.853630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.853694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.853703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.416 qpair failed and we were unable to recover it. 
00:33:23.416 [2024-06-07 23:29:45.854049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.854272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.854283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.416 qpair failed and we were unable to recover it. 00:33:23.416 [2024-06-07 23:29:45.854457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.854799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.854808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.416 qpair failed and we were unable to recover it. 00:33:23.416 [2024-06-07 23:29:45.855165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.855391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.855400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.416 qpair failed and we were unable to recover it. 00:33:23.416 [2024-06-07 23:29:45.855765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.856097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.856107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.416 qpair failed and we were unable to recover it. 00:33:23.416 [2024-06-07 23:29:45.856445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.416 [2024-06-07 23:29:45.856677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.417 [2024-06-07 23:29:45.856686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.417 qpair failed and we were unable to recover it. 00:33:23.417 [2024-06-07 23:29:45.857034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.417 [2024-06-07 23:29:45.857278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.417 [2024-06-07 23:29:45.857294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.417 qpair failed and we were unable to recover it. 00:33:23.417 [2024-06-07 23:29:45.857514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.417 [2024-06-07 23:29:45.857867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.417 [2024-06-07 23:29:45.857876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.417 qpair failed and we were unable to recover it. 
00:33:23.417 [2024-06-07 23:29:45.858087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.417 [2024-06-07 23:29:45.858460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.417 [2024-06-07 23:29:45.858469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420
00:33:23.417 qpair failed and we were unable to recover it.
[... the same four-line sequence (two posix_sock_create "connect() failed, errno = 111" errors, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats continuously from 23:29:45.858 through 23:29:45.955 ...]
00:33:23.422 [2024-06-07 23:29:45.955054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.422 [2024-06-07 23:29:45.955380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.422 [2024-06-07 23:29:45.955389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420
00:33:23.422 qpair failed and we were unable to recover it.
00:33:23.422 [2024-06-07 23:29:45.955696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.422 [2024-06-07 23:29:45.956038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.422 [2024-06-07 23:29:45.956047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.422 qpair failed and we were unable to recover it. 00:33:23.422 [2024-06-07 23:29:45.956308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.956491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.956499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.423 qpair failed and we were unable to recover it. 00:33:23.423 [2024-06-07 23:29:45.956867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.957249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.957260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.423 qpair failed and we were unable to recover it. 00:33:23.423 [2024-06-07 23:29:45.957544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.957912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.957921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.423 qpair failed and we were unable to recover it. 00:33:23.423 [2024-06-07 23:29:45.958134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.958496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.958506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.423 qpair failed and we were unable to recover it. 00:33:23.423 [2024-06-07 23:29:45.958836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.959192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.959202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.423 qpair failed and we were unable to recover it. 00:33:23.423 [2024-06-07 23:29:45.959555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.959891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.959900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.423 qpair failed and we were unable to recover it. 
00:33:23.423 [2024-06-07 23:29:45.960255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.960599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.960609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.423 qpair failed and we were unable to recover it. 00:33:23.423 [2024-06-07 23:29:45.960818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.961147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.961157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.423 qpair failed and we were unable to recover it. 00:33:23.423 [2024-06-07 23:29:45.961507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.961887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.961895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.423 qpair failed and we were unable to recover it. 00:33:23.423 [2024-06-07 23:29:45.962246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.962592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.962602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.423 qpair failed and we were unable to recover it. 00:33:23.423 [2024-06-07 23:29:45.962952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.963175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.963185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.423 qpair failed and we were unable to recover it. 00:33:23.423 [2024-06-07 23:29:45.963554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.963895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.963905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.423 qpair failed and we were unable to recover it. 00:33:23.423 [2024-06-07 23:29:45.964220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.964582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.964592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.423 qpair failed and we were unable to recover it. 
00:33:23.423 [2024-06-07 23:29:45.964942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.965169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.965179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.423 qpair failed and we were unable to recover it. 00:33:23.423 [2024-06-07 23:29:45.965538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.965915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.965925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.423 qpair failed and we were unable to recover it. 00:33:23.423 [2024-06-07 23:29:45.966280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.966547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.966556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.423 qpair failed and we were unable to recover it. 00:33:23.423 [2024-06-07 23:29:45.966916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.967295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.967305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.423 qpair failed and we were unable to recover it. 00:33:23.423 [2024-06-07 23:29:45.967650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.967994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.968003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.423 qpair failed and we were unable to recover it. 00:33:23.423 [2024-06-07 23:29:45.968334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.968602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.968611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.423 qpair failed and we were unable to recover it. 00:33:23.423 [2024-06-07 23:29:45.968965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.969179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.969188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.423 qpair failed and we were unable to recover it. 
00:33:23.423 [2024-06-07 23:29:45.969582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.970038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.970048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.423 qpair failed and we were unable to recover it. 00:33:23.423 [2024-06-07 23:29:45.970229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.970576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.970585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.423 qpair failed and we were unable to recover it. 00:33:23.423 [2024-06-07 23:29:45.970875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.971260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.971270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.423 qpair failed and we were unable to recover it. 00:33:23.423 [2024-06-07 23:29:45.971603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.971981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.971989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.423 qpair failed and we were unable to recover it. 00:33:23.423 [2024-06-07 23:29:45.972362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.972568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.972578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.423 qpair failed and we were unable to recover it. 00:33:23.423 [2024-06-07 23:29:45.972943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.423 [2024-06-07 23:29:45.973298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.973308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.424 qpair failed and we were unable to recover it. 00:33:23.424 [2024-06-07 23:29:45.973525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.973840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.973849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.424 qpair failed and we were unable to recover it. 
00:33:23.424 [2024-06-07 23:29:45.974030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.974375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.974385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.424 qpair failed and we were unable to recover it. 00:33:23.424 [2024-06-07 23:29:45.974765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.974921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.974930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.424 qpair failed and we were unable to recover it. 00:33:23.424 [2024-06-07 23:29:45.975264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.975615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.975624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.424 qpair failed and we were unable to recover it. 00:33:23.424 [2024-06-07 23:29:45.975949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.976312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.976321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.424 qpair failed and we were unable to recover it. 00:33:23.424 [2024-06-07 23:29:45.976524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.976730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.976739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.424 qpair failed and we were unable to recover it. 00:33:23.424 [2024-06-07 23:29:45.976992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.977365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.977374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.424 qpair failed and we were unable to recover it. 00:33:23.424 [2024-06-07 23:29:45.977718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.978092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.978101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.424 qpair failed and we were unable to recover it. 
00:33:23.424 [2024-06-07 23:29:45.978305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.978738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.978747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.424 qpair failed and we were unable to recover it. 00:33:23.424 [2024-06-07 23:29:45.978949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.979319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.979328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.424 qpair failed and we were unable to recover it. 00:33:23.424 [2024-06-07 23:29:45.979669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.979865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.979874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.424 qpair failed and we were unable to recover it. 00:33:23.424 [2024-06-07 23:29:45.980215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.980578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.980587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.424 qpair failed and we were unable to recover it. 00:33:23.424 [2024-06-07 23:29:45.980777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.980994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.981004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.424 qpair failed and we were unable to recover it. 00:33:23.424 [2024-06-07 23:29:45.981250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.981402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.981412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.424 qpair failed and we were unable to recover it. 00:33:23.424 [2024-06-07 23:29:45.981758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.981973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.981982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.424 qpair failed and we were unable to recover it. 
00:33:23.424 [2024-06-07 23:29:45.982344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.982505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.982513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.424 qpair failed and we were unable to recover it. 00:33:23.424 [2024-06-07 23:29:45.982945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.983305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.983315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.424 qpair failed and we were unable to recover it. 00:33:23.424 [2024-06-07 23:29:45.983649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.983879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.983888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.424 qpair failed and we were unable to recover it. 00:33:23.424 [2024-06-07 23:29:45.984219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.984560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.984570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.424 qpair failed and we were unable to recover it. 00:33:23.424 [2024-06-07 23:29:45.984896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.985274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.985283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.424 qpair failed and we were unable to recover it. 00:33:23.424 [2024-06-07 23:29:45.985474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.985646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.985655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.424 qpair failed and we were unable to recover it. 00:33:23.424 [2024-06-07 23:29:45.985982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.986178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.986187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.424 qpair failed and we were unable to recover it. 
00:33:23.424 [2024-06-07 23:29:45.986579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.986919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.986928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.424 qpair failed and we were unable to recover it. 00:33:23.424 [2024-06-07 23:29:45.987274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.987664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.987673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.424 qpair failed and we were unable to recover it. 00:33:23.424 [2024-06-07 23:29:45.988057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.988412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.988421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.424 qpair failed and we were unable to recover it. 00:33:23.424 [2024-06-07 23:29:45.988632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.989007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.989016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.424 qpair failed and we were unable to recover it. 00:33:23.424 [2024-06-07 23:29:45.989397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.989771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.989782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.424 qpair failed and we were unable to recover it. 00:33:23.424 [2024-06-07 23:29:45.990138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.990343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.424 [2024-06-07 23:29:45.990352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.425 qpair failed and we were unable to recover it. 00:33:23.425 [2024-06-07 23:29:45.990718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:45.990929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:45.990938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.425 qpair failed and we were unable to recover it. 
00:33:23.425 [2024-06-07 23:29:45.991291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:45.991652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:45.991661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.425 qpair failed and we were unable to recover it. 00:33:23.425 [2024-06-07 23:29:45.992007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:45.992391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:45.992399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.425 qpair failed and we were unable to recover it. 00:33:23.425 [2024-06-07 23:29:45.992740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:45.993090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:45.993099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.425 qpair failed and we were unable to recover it. 00:33:23.425 [2024-06-07 23:29:45.993434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:45.993641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:45.993650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.425 qpair failed and we were unable to recover it. 00:33:23.425 [2024-06-07 23:29:45.994017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:45.994432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:45.994441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.425 qpair failed and we were unable to recover it. 00:33:23.425 [2024-06-07 23:29:45.994771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:45.995149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:45.995158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.425 qpair failed and we were unable to recover it. 00:33:23.425 [2024-06-07 23:29:45.995529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:45.995785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:45.995794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.425 qpair failed and we were unable to recover it. 
00:33:23.425 [2024-06-07 23:29:45.996139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:45.996473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:45.996485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.425 qpair failed and we were unable to recover it. 00:33:23.425 [2024-06-07 23:29:45.996873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:45.997088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:45.997097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.425 qpair failed and we were unable to recover it. 00:33:23.425 [2024-06-07 23:29:45.997472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:45.997678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:45.997687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.425 qpair failed and we were unable to recover it. 00:33:23.425 [2024-06-07 23:29:45.998035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:45.998405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:45.998415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.425 qpair failed and we were unable to recover it. 00:33:23.425 [2024-06-07 23:29:45.998771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:45.999107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:45.999117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.425 qpair failed and we were unable to recover it. 00:33:23.425 [2024-06-07 23:29:45.999280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:45.999543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:45.999552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.425 qpair failed and we were unable to recover it. 00:33:23.425 [2024-06-07 23:29:45.999899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:46.000257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:46.000266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.425 qpair failed and we were unable to recover it. 
00:33:23.425 [2024-06-07 23:29:46.000592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:46.000929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:46.000937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.425 qpair failed and we were unable to recover it. 00:33:23.425 [2024-06-07 23:29:46.001087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:46.001556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:46.001565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.425 qpair failed and we were unable to recover it. 00:33:23.425 [2024-06-07 23:29:46.001898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:46.002261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:46.002271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.425 qpair failed and we were unable to recover it. 00:33:23.425 [2024-06-07 23:29:46.002745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:46.003086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:46.003094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.425 qpair failed and we were unable to recover it. 00:33:23.425 [2024-06-07 23:29:46.003439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:46.003777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:46.003785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.425 qpair failed and we were unable to recover it. 00:33:23.425 [2024-06-07 23:29:46.004153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:46.004486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:46.004496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.425 qpair failed and we were unable to recover it. 00:33:23.425 [2024-06-07 23:29:46.004846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:46.005186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:46.005196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.425 qpair failed and we were unable to recover it. 
00:33:23.425 [2024-06-07 23:29:46.005413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:46.005848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:46.005858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.425 qpair failed and we were unable to recover it. 00:33:23.425 [2024-06-07 23:29:46.006202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:46.006421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:46.006431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.425 qpair failed and we were unable to recover it. 00:33:23.425 [2024-06-07 23:29:46.006779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:46.007163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:46.007173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.425 qpair failed and we were unable to recover it. 00:33:23.425 [2024-06-07 23:29:46.007523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:46.007772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:46.007782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.425 qpair failed and we were unable to recover it. 00:33:23.425 [2024-06-07 23:29:46.008000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:46.008385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:46.008394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.425 qpair failed and we were unable to recover it. 00:33:23.425 [2024-06-07 23:29:46.008569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.425 [2024-06-07 23:29:46.009000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.009009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.426 qpair failed and we were unable to recover it. 00:33:23.426 [2024-06-07 23:29:46.009339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.009563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.009572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.426 qpair failed and we were unable to recover it. 
00:33:23.426 [2024-06-07 23:29:46.009991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.010327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.010336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.426 qpair failed and we were unable to recover it. 00:33:23.426 [2024-06-07 23:29:46.010404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.010725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.010734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.426 qpair failed and we were unable to recover it. 00:33:23.426 [2024-06-07 23:29:46.010883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.011087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.011096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.426 qpair failed and we were unable to recover it. 00:33:23.426 [2024-06-07 23:29:46.011412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.011769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.011778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.426 qpair failed and we were unable to recover it. 00:33:23.426 [2024-06-07 23:29:46.012103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.012461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.012470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.426 qpair failed and we were unable to recover it. 00:33:23.426 [2024-06-07 23:29:46.012834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.013194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.013203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.426 qpair failed and we were unable to recover it. 00:33:23.426 [2024-06-07 23:29:46.013541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.013905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.013915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.426 qpair failed and we were unable to recover it. 
00:33:23.426 [2024-06-07 23:29:46.013975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.014195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.014204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.426 qpair failed and we were unable to recover it. 00:33:23.426 [2024-06-07 23:29:46.014553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.014757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.014766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.426 qpair failed and we were unable to recover it. 00:33:23.426 [2024-06-07 23:29:46.015102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.015464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.015473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.426 qpair failed and we were unable to recover it. 00:33:23.426 [2024-06-07 23:29:46.015691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.015754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.015763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.426 qpair failed and we were unable to recover it. 00:33:23.426 [2024-06-07 23:29:46.016089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.016348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.016358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.426 qpair failed and we were unable to recover it. 00:33:23.426 [2024-06-07 23:29:46.016741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.016990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.017000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.426 qpair failed and we were unable to recover it. 00:33:23.426 [2024-06-07 23:29:46.017386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.017746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.017755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.426 qpair failed and we were unable to recover it. 
00:33:23.426 [2024-06-07 23:29:46.017964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.018351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.018361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.426 qpair failed and we were unable to recover it. 00:33:23.426 [2024-06-07 23:29:46.018719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.019079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.019087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.426 qpair failed and we were unable to recover it. 00:33:23.426 [2024-06-07 23:29:46.019428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.019643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.019652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.426 qpair failed and we were unable to recover it. 00:33:23.426 [2024-06-07 23:29:46.020016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.020389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.020398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.426 qpair failed and we were unable to recover it. 00:33:23.426 [2024-06-07 23:29:46.020726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.020914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.020923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.426 qpair failed and we were unable to recover it. 00:33:23.426 [2024-06-07 23:29:46.020980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.021260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.021270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.426 qpair failed and we were unable to recover it. 00:33:23.426 [2024-06-07 23:29:46.021587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.021960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.021969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.426 qpair failed and we were unable to recover it. 
00:33:23.426 [2024-06-07 23:29:46.022314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.022525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.022534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.426 qpair failed and we were unable to recover it. 00:33:23.426 [2024-06-07 23:29:46.022899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.023251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.023261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.426 qpair failed and we were unable to recover it. 00:33:23.426 [2024-06-07 23:29:46.023323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.023665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.023674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.426 qpair failed and we were unable to recover it. 00:33:23.426 [2024-06-07 23:29:46.024025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.024359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.024368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.426 qpair failed and we were unable to recover it. 00:33:23.426 [2024-06-07 23:29:46.024736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.025095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.025103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.426 qpair failed and we were unable to recover it. 00:33:23.426 [2024-06-07 23:29:46.025406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.025761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.426 [2024-06-07 23:29:46.025770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.426 qpair failed and we were unable to recover it. 00:33:23.427 [2024-06-07 23:29:46.025959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.026271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.026280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.427 qpair failed and we were unable to recover it. 
00:33:23.427 [2024-06-07 23:29:46.026625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.027000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.027009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.427 qpair failed and we were unable to recover it. 00:33:23.427 [2024-06-07 23:29:46.027406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.027758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.027766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.427 qpair failed and we were unable to recover it. 00:33:23.427 [2024-06-07 23:29:46.028114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.028318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.028329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.427 qpair failed and we were unable to recover it. 00:33:23.427 [2024-06-07 23:29:46.028562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.028726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.028736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.427 qpair failed and we were unable to recover it. 00:33:23.427 [2024-06-07 23:29:46.029065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.029284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.029293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.427 qpair failed and we were unable to recover it. 00:33:23.427 [2024-06-07 23:29:46.029645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.029878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.029888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.427 qpair failed and we were unable to recover it. 00:33:23.427 [2024-06-07 23:29:46.030223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.030560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.030570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.427 qpair failed and we were unable to recover it. 
00:33:23.427 [2024-06-07 23:29:46.030957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.031316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.031326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.427 qpair failed and we were unable to recover it. 00:33:23.427 [2024-06-07 23:29:46.031589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.031977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.031986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.427 qpair failed and we were unable to recover it. 00:33:23.427 [2024-06-07 23:29:46.032400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.032580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.032590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.427 qpair failed and we were unable to recover it. 00:33:23.427 [2024-06-07 23:29:46.032923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.033308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.033319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.427 qpair failed and we were unable to recover it. 00:33:23.427 [2024-06-07 23:29:46.033703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.034039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.034049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.427 qpair failed and we were unable to recover it. 00:33:23.427 [2024-06-07 23:29:46.034264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.034654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.034664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.427 qpair failed and we were unable to recover it. 00:33:23.427 [2024-06-07 23:29:46.034857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.035228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.035237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.427 qpair failed and we were unable to recover it. 
00:33:23.427 [2024-06-07 23:29:46.035587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.035787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.035797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.427 qpair failed and we were unable to recover it. 00:33:23.427 [2024-06-07 23:29:46.036152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.036365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.036375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.427 qpair failed and we were unable to recover it. 00:33:23.427 [2024-06-07 23:29:46.036752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.037136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.037145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.427 qpair failed and we were unable to recover it. 00:33:23.427 [2024-06-07 23:29:46.037449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.037673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.037682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.427 qpair failed and we were unable to recover it. 00:33:23.427 [2024-06-07 23:29:46.037870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.038104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.038115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.427 qpair failed and we were unable to recover it. 00:33:23.427 [2024-06-07 23:29:46.038308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.038536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.038546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.427 qpair failed and we were unable to recover it. 00:33:23.427 [2024-06-07 23:29:46.038729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.038924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.038935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.427 qpair failed and we were unable to recover it. 
00:33:23.427 [2024-06-07 23:29:46.039304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.039536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.039545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.427 qpair failed and we were unable to recover it. 00:33:23.427 [2024-06-07 23:29:46.039905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.040135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.040144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.427 qpair failed and we were unable to recover it. 00:33:23.427 [2024-06-07 23:29:46.040500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.040900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.427 [2024-06-07 23:29:46.040909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.428 qpair failed and we were unable to recover it. 00:33:23.428 [2024-06-07 23:29:46.041254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.041472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.041481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.428 qpair failed and we were unable to recover it. 00:33:23.428 [2024-06-07 23:29:46.041829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.042031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.042041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.428 qpair failed and we were unable to recover it. 00:33:23.428 [2024-06-07 23:29:46.042362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.042749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.042758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.428 qpair failed and we were unable to recover it. 00:33:23.428 [2024-06-07 23:29:46.042972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.043208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.043218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.428 qpair failed and we were unable to recover it. 
00:33:23.428 [2024-06-07 23:29:46.043582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.043947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.043956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.428 qpair failed and we were unable to recover it. 00:33:23.428 [2024-06-07 23:29:46.044287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.044650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.044659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.428 qpair failed and we were unable to recover it. 00:33:23.428 [2024-06-07 23:29:46.045001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.045090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.045099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.428 qpair failed and we were unable to recover it. 00:33:23.428 [2024-06-07 23:29:46.045271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.045602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.045611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.428 qpair failed and we were unable to recover it. 00:33:23.428 [2024-06-07 23:29:46.045965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.046252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.046262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.428 qpair failed and we were unable to recover it. 00:33:23.428 [2024-06-07 23:29:46.046612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.046815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.046824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.428 qpair failed and we were unable to recover it. 00:33:23.428 [2024-06-07 23:29:46.047199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.047412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.047421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.428 qpair failed and we were unable to recover it. 
00:33:23.428 [2024-06-07 23:29:46.047619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.047932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.047942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.428 qpair failed and we were unable to recover it. 00:33:23.428 [2024-06-07 23:29:46.048329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.048419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.048428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.428 qpair failed and we were unable to recover it. 00:33:23.428 [2024-06-07 23:29:46.048682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.049033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.049042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.428 qpair failed and we were unable to recover it. 00:33:23.428 [2024-06-07 23:29:46.049370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.049601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.049610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.428 qpair failed and we were unable to recover it. 00:33:23.428 [2024-06-07 23:29:46.049816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.050190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.050198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.428 qpair failed and we were unable to recover it. 00:33:23.428 [2024-06-07 23:29:46.050532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.050883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.050892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.428 qpair failed and we were unable to recover it. 00:33:23.428 [2024-06-07 23:29:46.051084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.051428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.051437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.428 qpair failed and we were unable to recover it. 
00:33:23.428 [2024-06-07 23:29:46.051788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.051954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.051963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.428 qpair failed and we were unable to recover it. 00:33:23.428 [2024-06-07 23:29:46.052203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.052517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.052527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.428 qpair failed and we were unable to recover it. 00:33:23.428 [2024-06-07 23:29:46.052749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.052999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.053008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.428 qpair failed and we were unable to recover it. 00:33:23.428 [2024-06-07 23:29:46.053383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.053773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.053782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.428 qpair failed and we were unable to recover it. 00:33:23.428 [2024-06-07 23:29:46.054131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.054478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.054488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.428 qpair failed and we were unable to recover it. 00:33:23.428 [2024-06-07 23:29:46.054678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.055000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.055009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.428 qpair failed and we were unable to recover it. 00:33:23.428 [2024-06-07 23:29:46.055382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.055664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.055674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.428 qpair failed and we were unable to recover it. 
00:33:23.428 [2024-06-07 23:29:46.056007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.056235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.056251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.428 qpair failed and we were unable to recover it. 00:33:23.428 [2024-06-07 23:29:46.056461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.056575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.056584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.428 qpair failed and we were unable to recover it. 00:33:23.428 [2024-06-07 23:29:46.056902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.057150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.428 [2024-06-07 23:29:46.057158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.428 qpair failed and we were unable to recover it. 00:33:23.428 [2024-06-07 23:29:46.057369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.057642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.057651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.429 qpair failed and we were unable to recover it. 00:33:23.429 [2024-06-07 23:29:46.057900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.058057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.058068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.429 qpair failed and we were unable to recover it. 00:33:23.429 [2024-06-07 23:29:46.058414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.058793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.058802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.429 qpair failed and we were unable to recover it. 00:33:23.429 [2024-06-07 23:29:46.059053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.059283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.059292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.429 qpair failed and we were unable to recover it. 
00:33:23.429 [2024-06-07 23:29:46.059649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.060057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.060066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.429 qpair failed and we were unable to recover it. 00:33:23.429 [2024-06-07 23:29:46.060284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.060682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.060691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.429 qpair failed and we were unable to recover it. 00:33:23.429 [2024-06-07 23:29:46.061045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.061438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.061448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.429 qpair failed and we were unable to recover it. 00:33:23.429 [2024-06-07 23:29:46.061790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.062147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.062156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.429 qpair failed and we were unable to recover it. 00:33:23.429 [2024-06-07 23:29:46.062514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.062878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.062887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.429 qpair failed and we were unable to recover it. 00:33:23.429 [2024-06-07 23:29:46.063076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.063453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.063463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.429 qpair failed and we were unable to recover it. 00:33:23.429 [2024-06-07 23:29:46.063822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.064184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.064194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.429 qpair failed and we were unable to recover it. 
00:33:23.429 [2024-06-07 23:29:46.064560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.064795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.064804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.429 qpair failed and we were unable to recover it. 00:33:23.429 [2024-06-07 23:29:46.065139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.065356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.065366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.429 qpair failed and we were unable to recover it. 00:33:23.429 [2024-06-07 23:29:46.065699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.065885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.065894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.429 qpair failed and we were unable to recover it. 00:33:23.429 [2024-06-07 23:29:46.066249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.066613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.066622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.429 qpair failed and we were unable to recover it. 00:33:23.429 [2024-06-07 23:29:46.066810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.067186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.067196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.429 qpair failed and we were unable to recover it. 00:33:23.429 [2024-06-07 23:29:46.067549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.067891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.067900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.429 qpair failed and we were unable to recover it. 00:33:23.429 [2024-06-07 23:29:46.068203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.068473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.068483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.429 qpair failed and we were unable to recover it. 
00:33:23.429 [2024-06-07 23:29:46.068815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.069204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.069214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.429 qpair failed and we were unable to recover it. 00:33:23.429 [2024-06-07 23:29:46.069570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.069789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.069799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.429 qpair failed and we were unable to recover it. 00:33:23.429 [2024-06-07 23:29:46.070042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.070429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.070438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.429 qpair failed and we were unable to recover it. 00:33:23.429 [2024-06-07 23:29:46.070856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.071062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.071071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.429 qpair failed and we were unable to recover it. 00:33:23.429 [2024-06-07 23:29:46.071298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.071653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.071662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.429 qpair failed and we were unable to recover it. 00:33:23.429 [2024-06-07 23:29:46.071876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.072258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.072268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.429 qpair failed and we were unable to recover it. 00:33:23.429 [2024-06-07 23:29:46.072689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.073058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.073067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.429 qpair failed and we were unable to recover it. 
00:33:23.429 [2024-06-07 23:29:46.073404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.073787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.073796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.429 qpair failed and we were unable to recover it. 00:33:23.429 [2024-06-07 23:29:46.074152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.074299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.074308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.429 qpair failed and we were unable to recover it. 00:33:23.429 [2024-06-07 23:29:46.074639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.075023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.075032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.429 qpair failed and we were unable to recover it. 00:33:23.429 [2024-06-07 23:29:46.075398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.075786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.429 [2024-06-07 23:29:46.075795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.430 qpair failed and we were unable to recover it. 00:33:23.430 [2024-06-07 23:29:46.076145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.430 [2024-06-07 23:29:46.076361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.430 [2024-06-07 23:29:46.076370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.430 qpair failed and we were unable to recover it. 00:33:23.430 [2024-06-07 23:29:46.076734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.430 [2024-06-07 23:29:46.076953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.430 [2024-06-07 23:29:46.076962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.430 qpair failed and we were unable to recover it. 00:33:23.430 [2024-06-07 23:29:46.077208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.430 [2024-06-07 23:29:46.077568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.430 [2024-06-07 23:29:46.077578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.430 qpair failed and we were unable to recover it. 
00:33:23.430 [2024-06-07 23:29:46.077909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.430 [2024-06-07 23:29:46.078298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.430 [2024-06-07 23:29:46.078306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.430 qpair failed and we were unable to recover it. 00:33:23.430 [2024-06-07 23:29:46.078525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.430 [2024-06-07 23:29:46.078900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.430 [2024-06-07 23:29:46.078909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.430 qpair failed and we were unable to recover it. 00:33:23.430 [2024-06-07 23:29:46.079322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.430 [2024-06-07 23:29:46.079553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.430 [2024-06-07 23:29:46.079562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.430 qpair failed and we were unable to recover it. 00:33:23.430 [2024-06-07 23:29:46.079780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.430 [2024-06-07 23:29:46.080143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.430 [2024-06-07 23:29:46.080151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.430 qpair failed and we were unable to recover it. 00:33:23.430 [2024-06-07 23:29:46.080327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.430 [2024-06-07 23:29:46.080676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.430 [2024-06-07 23:29:46.080685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.430 qpair failed and we were unable to recover it. 00:33:23.430 [2024-06-07 23:29:46.080917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.430 [2024-06-07 23:29:46.080975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.430 [2024-06-07 23:29:46.080984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.430 qpair failed and we were unable to recover it. 00:33:23.430 [2024-06-07 23:29:46.081301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.430 [2024-06-07 23:29:46.081661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.430 [2024-06-07 23:29:46.081671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.430 qpair failed and we were unable to recover it. 
00:33:23.430 [2024-06-07 23:29:46.081891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.430 [2024-06-07 23:29:46.082235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.430 [2024-06-07 23:29:46.082256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.430 qpair failed and we were unable to recover it. 00:33:23.430 [2024-06-07 23:29:46.082621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.430 [2024-06-07 23:29:46.082966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.430 [2024-06-07 23:29:46.082975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.430 qpair failed and we were unable to recover it. 00:33:23.430 [2024-06-07 23:29:46.083175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.430 [2024-06-07 23:29:46.083559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.430 [2024-06-07 23:29:46.083568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.430 qpair failed and we were unable to recover it. 00:33:23.430 [2024-06-07 23:29:46.083928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.430 [2024-06-07 23:29:46.084104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.430 [2024-06-07 23:29:46.084114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.430 qpair failed and we were unable to recover it. 00:33:23.430 [2024-06-07 23:29:46.084450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.430 [2024-06-07 23:29:46.084636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.430 [2024-06-07 23:29:46.084644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.430 qpair failed and we were unable to recover it. 00:33:23.430 [2024-06-07 23:29:46.085062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.085515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.085526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.696 qpair failed and we were unable to recover it. 00:33:23.696 [2024-06-07 23:29:46.085756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.086123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.086132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.696 qpair failed and we were unable to recover it. 
00:33:23.696 [2024-06-07 23:29:46.086391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.086624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.086634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.696 qpair failed and we were unable to recover it. 00:33:23.696 [2024-06-07 23:29:46.086987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.087307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.087317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.696 qpair failed and we were unable to recover it. 00:33:23.696 [2024-06-07 23:29:46.087551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.087755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.087764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.696 qpair failed and we were unable to recover it. 00:33:23.696 [2024-06-07 23:29:46.088105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.088307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.088315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.696 qpair failed and we were unable to recover it. 00:33:23.696 [2024-06-07 23:29:46.088659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.089027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.089036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.696 qpair failed and we were unable to recover it. 00:33:23.696 [2024-06-07 23:29:46.089372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.089698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.089706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.696 qpair failed and we were unable to recover it. 00:33:23.696 [2024-06-07 23:29:46.089927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.090111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.090122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.696 qpair failed and we were unable to recover it. 
00:33:23.696 [2024-06-07 23:29:46.090568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.090927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.090936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.696 qpair failed and we were unable to recover it. 00:33:23.696 [2024-06-07 23:29:46.091294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.091723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.091731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.696 qpair failed and we were unable to recover it. 00:33:23.696 [2024-06-07 23:29:46.092068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.092428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.092437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.696 qpair failed and we were unable to recover it. 00:33:23.696 [2024-06-07 23:29:46.092745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.093062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.093071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.696 qpair failed and we were unable to recover it. 00:33:23.696 [2024-06-07 23:29:46.093429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.093644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.093653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.696 qpair failed and we were unable to recover it. 00:33:23.696 [2024-06-07 23:29:46.093753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.093954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.093963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.696 qpair failed and we were unable to recover it. 00:33:23.696 [2024-06-07 23:29:46.094249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.094587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.094596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.696 qpair failed and we were unable to recover it. 
00:33:23.696 [2024-06-07 23:29:46.094939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.095291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.095300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.696 qpair failed and we were unable to recover it. 00:33:23.696 [2024-06-07 23:29:46.095704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.096043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.096051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.696 qpair failed and we were unable to recover it. 00:33:23.696 [2024-06-07 23:29:46.096408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.096766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.696 [2024-06-07 23:29:46.096780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.696 qpair failed and we were unable to recover it. 00:33:23.696 [2024-06-07 23:29:46.097120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.097451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.097461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.697 qpair failed and we were unable to recover it. 00:33:23.697 [2024-06-07 23:29:46.097816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.098188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.098197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.697 qpair failed and we were unable to recover it. 00:33:23.697 [2024-06-07 23:29:46.098576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.098966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.098976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.697 qpair failed and we were unable to recover it. 00:33:23.697 [2024-06-07 23:29:46.099186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.099536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.099546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.697 qpair failed and we were unable to recover it. 
00:33:23.697 [2024-06-07 23:29:46.099850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.099995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.100004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.697 qpair failed and we were unable to recover it. 00:33:23.697 [2024-06-07 23:29:46.100311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.100641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.100650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.697 qpair failed and we were unable to recover it. 00:33:23.697 [2024-06-07 23:29:46.100854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.101113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.101123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.697 qpair failed and we were unable to recover it. 00:33:23.697 [2024-06-07 23:29:46.101488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.101803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.101812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.697 qpair failed and we were unable to recover it. 00:33:23.697 [2024-06-07 23:29:46.101915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.102156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.102165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.697 qpair failed and we were unable to recover it. 00:33:23.697 [2024-06-07 23:29:46.102520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.102785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.102794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.697 qpair failed and we were unable to recover it. 00:33:23.697 [2024-06-07 23:29:46.103024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.103332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.103341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.697 qpair failed and we were unable to recover it. 
00:33:23.697 [2024-06-07 23:29:46.103708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.103933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.103941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.697 qpair failed and we were unable to recover it. 00:33:23.697 [2024-06-07 23:29:46.104272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.104632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.104642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.697 qpair failed and we were unable to recover it. 00:33:23.697 [2024-06-07 23:29:46.105019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.105492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.105503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.697 qpair failed and we were unable to recover it. 00:33:23.697 [2024-06-07 23:29:46.105709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.106088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.106097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.697 qpair failed and we were unable to recover it. 00:33:23.697 [2024-06-07 23:29:46.106302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.106685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.106694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.697 qpair failed and we were unable to recover it. 00:33:23.697 [2024-06-07 23:29:46.107109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.107422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.107432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.697 qpair failed and we were unable to recover it. 00:33:23.697 [2024-06-07 23:29:46.107788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.108004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.108023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.697 qpair failed and we were unable to recover it. 
00:33:23.697 23:29:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:23.697 23:29:46 -- common/autotest_common.sh@852 -- # return 0 00:33:23.697 [2024-06-07 23:29:46.108362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 23:29:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:23.697 23:29:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:23.697 23:29:46 -- common/autotest_common.sh@10 -- # set +x 00:33:23.697 [2024-06-07 23:29:46.108726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.108748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.697 qpair failed and we were unable to recover it. 00:33:23.697 [2024-06-07 23:29:46.109176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.109572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.109583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.697 qpair failed and we were unable to recover it. 00:33:23.697 [2024-06-07 23:29:46.109777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.110145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.110153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.697 qpair failed and we were unable to recover it. 00:33:23.697 [2024-06-07 23:29:46.110507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.110856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.110867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.697 qpair failed and we were unable to recover it. 00:33:23.697 [2024-06-07 23:29:46.111199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.111534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.111544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.697 qpair failed and we were unable to recover it. 00:33:23.697 [2024-06-07 23:29:46.111896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.112273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.112284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.697 qpair failed and we were unable to recover it. 00:33:23.697 [2024-06-07 23:29:46.112640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.112855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.112865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.697 qpair failed and we were unable to recover it. 
00:33:23.697 [2024-06-07 23:29:46.113220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.113561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.697 [2024-06-07 23:29:46.113571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.697 qpair failed and we were unable to recover it. 00:33:23.697 [2024-06-07 23:29:46.113999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.114239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.114258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.698 qpair failed and we were unable to recover it. 00:33:23.698 [2024-06-07 23:29:46.114600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.114860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.114869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.698 qpair failed and we were unable to recover it. 00:33:23.698 [2024-06-07 23:29:46.115121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.115305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.115314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.698 qpair failed and we were unable to recover it. 00:33:23.698 [2024-06-07 23:29:46.115655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.115986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.115998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.698 qpair failed and we were unable to recover it. 00:33:23.698 [2024-06-07 23:29:46.116201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.116534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.116544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.698 qpair failed and we were unable to recover it. 00:33:23.698 [2024-06-07 23:29:46.116896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.117239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.117259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.698 qpair failed and we were unable to recover it. 
00:33:23.698 [2024-06-07 23:29:46.117614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.117831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.117840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.698 qpair failed and we were unable to recover it. 00:33:23.698 [2024-06-07 23:29:46.118200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.118444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.118454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.698 qpair failed and we were unable to recover it. 00:33:23.698 [2024-06-07 23:29:46.118839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.119208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.119217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.698 qpair failed and we were unable to recover it. 00:33:23.698 [2024-06-07 23:29:46.119694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.120023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.120032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.698 qpair failed and we were unable to recover it. 00:33:23.698 [2024-06-07 23:29:46.120443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.120783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.120792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.698 qpair failed and we were unable to recover it. 00:33:23.698 [2024-06-07 23:29:46.121177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.121388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.121398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.698 qpair failed and we were unable to recover it. 00:33:23.698 [2024-06-07 23:29:46.121617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.121959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.121969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.698 qpair failed and we were unable to recover it. 
00:33:23.698 [2024-06-07 23:29:46.122331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.122605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.122618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.698 qpair failed and we were unable to recover it. 00:33:23.698 [2024-06-07 23:29:46.122957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.123162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.123171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.698 qpair failed and we were unable to recover it. 00:33:23.698 [2024-06-07 23:29:46.123548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.123888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.123898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.698 qpair failed and we were unable to recover it. 00:33:23.698 [2024-06-07 23:29:46.124291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.124649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.124658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.698 qpair failed and we were unable to recover it. 00:33:23.698 [2024-06-07 23:29:46.124885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.125190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.125199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.698 qpair failed and we were unable to recover it. 00:33:23.698 [2024-06-07 23:29:46.125586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.125788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.125797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.698 qpair failed and we were unable to recover it. 00:33:23.698 [2024-06-07 23:29:46.126188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.126415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.126426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.698 qpair failed and we were unable to recover it. 
00:33:23.698 [2024-06-07 23:29:46.126811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.127157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.127167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.698 qpair failed and we were unable to recover it. 00:33:23.698 [2024-06-07 23:29:46.127541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.127920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.127929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.698 qpair failed and we were unable to recover it. 00:33:23.698 [2024-06-07 23:29:46.128312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.128642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.128651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.698 qpair failed and we were unable to recover it. 00:33:23.698 [2024-06-07 23:29:46.129009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.129217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.129227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.698 qpair failed and we were unable to recover it. 00:33:23.698 [2024-06-07 23:29:46.129604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.129936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.129946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.698 qpair failed and we were unable to recover it. 00:33:23.698 [2024-06-07 23:29:46.130296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.130538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.130547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.698 qpair failed and we were unable to recover it. 00:33:23.698 [2024-06-07 23:29:46.130904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.131125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.131134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.698 qpair failed and we were unable to recover it. 
00:33:23.698 [2024-06-07 23:29:46.131484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.698 [2024-06-07 23:29:46.131844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.131853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.699 qpair failed and we were unable to recover it. 00:33:23.699 [2024-06-07 23:29:46.132058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.132450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.132460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.699 qpair failed and we were unable to recover it. 00:33:23.699 [2024-06-07 23:29:46.132833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.133177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.133186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.699 qpair failed and we were unable to recover it. 00:33:23.699 [2024-06-07 23:29:46.133529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.133821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.133830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.699 qpair failed and we were unable to recover it. 00:33:23.699 [2024-06-07 23:29:46.134184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.134537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.134546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.699 qpair failed and we were unable to recover it. 00:33:23.699 [2024-06-07 23:29:46.134750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.134948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.134957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.699 qpair failed and we were unable to recover it. 00:33:23.699 [2024-06-07 23:29:46.135314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.135673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.135683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.699 qpair failed and we were unable to recover it. 
00:33:23.699 [2024-06-07 23:29:46.135876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.136234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.136251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.699 qpair failed and we were unable to recover it. 00:33:23.699 [2024-06-07 23:29:46.136571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.136815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.136825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.699 qpair failed and we were unable to recover it. 00:33:23.699 [2024-06-07 23:29:46.137144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.137330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.137340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.699 qpair failed and we were unable to recover it. 00:33:23.699 [2024-06-07 23:29:46.137756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.138086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.138096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.699 qpair failed and we were unable to recover it. 00:33:23.699 [2024-06-07 23:29:46.138457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.138827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.138836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.699 qpair failed and we were unable to recover it. 00:33:23.699 [2024-06-07 23:29:46.139066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.139409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.139419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.699 qpair failed and we were unable to recover it. 00:33:23.699 [2024-06-07 23:29:46.139751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.140107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.140116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.699 qpair failed and we were unable to recover it. 
00:33:23.699 [2024-06-07 23:29:46.140455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.140784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.140793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.699 qpair failed and we were unable to recover it. 00:33:23.699 [2024-06-07 23:29:46.141119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.141391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.141401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.699 qpair failed and we were unable to recover it. 00:33:23.699 [2024-06-07 23:29:46.141709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.142055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.142064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.699 qpair failed and we were unable to recover it. 00:33:23.699 [2024-06-07 23:29:46.142360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.142587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.142596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.699 qpair failed and we were unable to recover it. 00:33:23.699 [2024-06-07 23:29:46.142789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.143013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.143022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.699 qpair failed and we were unable to recover it. 00:33:23.699 [2024-06-07 23:29:46.143377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.143735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.143744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.699 qpair failed and we were unable to recover it. 00:33:23.699 [2024-06-07 23:29:46.144074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.144279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.144289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.699 qpair failed and we were unable to recover it. 
00:33:23.699 [2024-06-07 23:29:46.144616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.144994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.145003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.699 qpair failed and we were unable to recover it. 00:33:23.699 [2024-06-07 23:29:46.145218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.145453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.145463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.699 qpair failed and we were unable to recover it. 00:33:23.699 [2024-06-07 23:29:46.145781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.145992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.146002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.699 qpair failed and we were unable to recover it. 00:33:23.699 [2024-06-07 23:29:46.146197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.146383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.146392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.699 qpair failed and we were unable to recover it. 00:33:23.699 [2024-06-07 23:29:46.146759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 23:29:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:23.699 [2024-06-07 23:29:46.147096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.147107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.699 qpair failed and we were unable to recover it. 00:33:23.699 23:29:46 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:23.699 [2024-06-07 23:29:46.147451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 23:29:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:23.699 23:29:46 -- common/autotest_common.sh@10 -- # set +x 00:33:23.699 [2024-06-07 23:29:46.147862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.699 [2024-06-07 23:29:46.147875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.699 qpair failed and we were unable to recover it. 00:33:23.700 [2024-06-07 23:29:46.148214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.148579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.148589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.700 qpair failed and we were unable to recover it. 
00:33:23.700 [2024-06-07 23:29:46.148940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.149011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.149020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.700 qpair failed and we were unable to recover it. 00:33:23.700 [2024-06-07 23:29:46.149368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.149741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.149750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.700 qpair failed and we were unable to recover it. 00:33:23.700 [2024-06-07 23:29:46.150028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.150257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.150268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.700 qpair failed and we were unable to recover it. 00:33:23.700 [2024-06-07 23:29:46.150487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.150650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.150659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.700 qpair failed and we were unable to recover it. 00:33:23.700 [2024-06-07 23:29:46.151025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.151367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.151377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.700 qpair failed and we were unable to recover it. 00:33:23.700 [2024-06-07 23:29:46.151705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.151973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.151982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.700 qpair failed and we were unable to recover it. 00:33:23.700 [2024-06-07 23:29:46.152312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.152689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.152699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.700 qpair failed and we were unable to recover it. 
00:33:23.700 [2024-06-07 23:29:46.153021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.153371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.153382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.700 qpair failed and we were unable to recover it. 00:33:23.700 [2024-06-07 23:29:46.153691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.153936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.153945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.700 qpair failed and we were unable to recover it. 00:33:23.700 [2024-06-07 23:29:46.154283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.154654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.154664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.700 qpair failed and we were unable to recover it. 00:33:23.700 [2024-06-07 23:29:46.155015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.155362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.155371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.700 qpair failed and we were unable to recover it. 00:33:23.700 [2024-06-07 23:29:46.155698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.156017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.156026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.700 qpair failed and we were unable to recover it. 00:33:23.700 [2024-06-07 23:29:46.156396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.156765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.156774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.700 qpair failed and we were unable to recover it. 00:33:23.700 [2024-06-07 23:29:46.156996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.157361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.157371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.700 qpair failed and we were unable to recover it. 
00:33:23.700 [2024-06-07 23:29:46.157613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.157848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.157857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.700 qpair failed and we were unable to recover it. 00:33:23.700 [2024-06-07 23:29:46.158200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.158536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.158546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.700 qpair failed and we were unable to recover it. 00:33:23.700 [2024-06-07 23:29:46.158885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.159189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.159198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.700 qpair failed and we were unable to recover it. 00:33:23.700 [2024-06-07 23:29:46.159434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.159643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.159652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.700 qpair failed and we were unable to recover it. 00:33:23.700 [2024-06-07 23:29:46.159967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.160345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.160354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.700 qpair failed and we were unable to recover it. 00:33:23.700 [2024-06-07 23:29:46.160816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.161049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.161059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.700 qpair failed and we were unable to recover it. 00:33:23.700 [2024-06-07 23:29:46.161265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.161498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.161507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.700 qpair failed and we were unable to recover it. 
00:33:23.700 [2024-06-07 23:29:46.161715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.162051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.162060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.700 qpair failed and we were unable to recover it. 00:33:23.700 [2024-06-07 23:29:46.162444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.162821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.162829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.700 qpair failed and we were unable to recover it. 00:33:23.700 Malloc0 00:33:23.700 23:29:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:23.700 [2024-06-07 23:29:46.163199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 23:29:46 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:23.700 23:29:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:23.700 23:29:46 -- common/autotest_common.sh@10 -- # set +x 00:33:23.700 [2024-06-07 23:29:46.163572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.163592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.700 qpair failed and we were unable to recover it. 00:33:23.700 [2024-06-07 23:29:46.163952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.164313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.700 [2024-06-07 23:29:46.164323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.700 qpair failed and we were unable to recover it. 00:33:23.701 [2024-06-07 23:29:46.164674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.165048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.165058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 00:33:23.701 [2024-06-07 23:29:46.165266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.165490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.165499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 00:33:23.701 [2024-06-07 23:29:46.165753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.166104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.166114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 
00:33:23.701 [2024-06-07 23:29:46.166493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.166561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.166573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 00:33:23.701 [2024-06-07 23:29:46.166611] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:23.701 [2024-06-07 23:29:46.166917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.167212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.167221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 00:33:23.701 [2024-06-07 23:29:46.167576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.167958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.167968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 00:33:23.701 [2024-06-07 23:29:46.168320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.168686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.168695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 00:33:23.701 [2024-06-07 23:29:46.169044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.169423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.169433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 00:33:23.701 [2024-06-07 23:29:46.169660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.169913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.169922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 00:33:23.701 [2024-06-07 23:29:46.170266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.170461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.170471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 
00:33:23.701 [2024-06-07 23:29:46.170543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.170957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.170966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 00:33:23.701 [2024-06-07 23:29:46.171291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.171504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.171513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 00:33:23.701 [2024-06-07 23:29:46.171913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.172248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.172257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 00:33:23.701 [2024-06-07 23:29:46.172694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.172894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.172906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 00:33:23.701 [2024-06-07 23:29:46.173270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.173602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.173610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 00:33:23.701 [2024-06-07 23:29:46.173989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.174213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.174222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 00:33:23.701 [2024-06-07 23:29:46.174464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.174788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.174806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 
00:33:23.701 23:29:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:23.701 23:29:46 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:23.701 23:29:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:23.701 [2024-06-07 23:29:46.175134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 23:29:46 -- common/autotest_common.sh@10 -- # set +x 00:33:23.701 [2024-06-07 23:29:46.175501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.175515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 00:33:23.701 [2024-06-07 23:29:46.175835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.176042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.176050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 00:33:23.701 [2024-06-07 23:29:46.176238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.176554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.176563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 00:33:23.701 [2024-06-07 23:29:46.176936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.177268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.177278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 00:33:23.701 [2024-06-07 23:29:46.177643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.177990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.701 [2024-06-07 23:29:46.177999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.701 qpair failed and we were unable to recover it. 00:33:23.702 [2024-06-07 23:29:46.178317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.178665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.178674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 00:33:23.702 [2024-06-07 23:29:46.178994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.179344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.179354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 
00:33:23.702 [2024-06-07 23:29:46.179594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.179817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.179826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 00:33:23.702 [2024-06-07 23:29:46.179939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.180268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.180277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 00:33:23.702 [2024-06-07 23:29:46.180601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.180806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.180815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 00:33:23.702 [2024-06-07 23:29:46.181140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.181332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.181341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 00:33:23.702 [2024-06-07 23:29:46.181619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.181989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.181999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 00:33:23.702 [2024-06-07 23:29:46.182349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.182582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.182590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 00:33:23.702 23:29:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:23.702 [2024-06-07 23:29:46.182853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 23:29:46 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:23.702 23:29:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:23.702 23:29:46 -- common/autotest_common.sh@10 -- # set +x 00:33:23.702 [2024-06-07 23:29:46.183208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.183223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 
00:33:23.702 [2024-06-07 23:29:46.183586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.183893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.183902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 00:33:23.702 [2024-06-07 23:29:46.184093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.184199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.184208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 00:33:23.702 [2024-06-07 23:29:46.184636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.184971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.184980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 00:33:23.702 [2024-06-07 23:29:46.185197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.185394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.185403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 00:33:23.702 [2024-06-07 23:29:46.185776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.185994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.186003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 00:33:23.702 [2024-06-07 23:29:46.186176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.186508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.186518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 00:33:23.702 [2024-06-07 23:29:46.186839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.187208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.187216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 
00:33:23.702 [2024-06-07 23:29:46.187561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.187720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.187729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 00:33:23.702 [2024-06-07 23:29:46.188099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.188328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.188338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 00:33:23.702 [2024-06-07 23:29:46.188678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.188885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.188894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 00:33:23.702 [2024-06-07 23:29:46.189087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.189448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.189458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 00:33:23.702 [2024-06-07 23:29:46.189686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.189979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.189993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 00:33:23.702 [2024-06-07 23:29:46.190317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.190648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.190656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 00:33:23.702 23:29:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:23.702 23:29:46 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:23.702 [2024-06-07 23:29:46.190968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 23:29:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:23.702 23:29:46 -- common/autotest_common.sh@10 -- # set +x 00:33:23.702 [2024-06-07 23:29:46.191190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.191209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 
00:33:23.702 [2024-06-07 23:29:46.191652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.191906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.191916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 00:33:23.702 [2024-06-07 23:29:46.192125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.192443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.192453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 00:33:23.702 [2024-06-07 23:29:46.192672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.192903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.192911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 00:33:23.702 [2024-06-07 23:29:46.193248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.193489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.702 [2024-06-07 23:29:46.193498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.702 qpair failed and we were unable to recover it. 00:33:23.703 [2024-06-07 23:29:46.193751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.703 [2024-06-07 23:29:46.194112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.703 [2024-06-07 23:29:46.194121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.703 qpair failed and we were unable to recover it. 00:33:23.703 [2024-06-07 23:29:46.194386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.703 [2024-06-07 23:29:46.194789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.703 [2024-06-07 23:29:46.194798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fdb0 with addr=10.0.0.2, port=4420 00:33:23.703 qpair failed and we were unable to recover it. 
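errno = 111 in the posix_sock_create failures above is ECONNREFUSED on Linux: nothing is accepting TCP connections on 10.0.0.2:4420 yet, so every host-side connect attempt is refused and the qpair is dropped. To confirm the errno mapping on the build host (a throwaway sketch, not part of the test):
  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'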
00:33:23.703 [2024-06-07 23:29:46.194830] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:23.703 [2024-06-07 23:29:46.197209] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.703 [2024-06-07 23:29:46.197299] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.703 [2024-06-07 23:29:46.197322] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.703 [2024-06-07 23:29:46.197330] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.703 [2024-06-07 23:29:46.197337] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.703 [2024-06-07 23:29:46.197357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.703 qpair failed and we were unable to recover it. 00:33:23.703 23:29:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:23.703 23:29:46 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:23.703 23:29:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:23.703 23:29:46 -- common/autotest_common.sh@10 -- # set +x 00:33:23.703 23:29:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:23.703 [2024-06-07 23:29:46.207181] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.703 [2024-06-07 23:29:46.207304] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.703 23:29:46 -- host/target_disconnect.sh@58 -- # wait 3057372 00:33:23.703 [2024-06-07 23:29:46.207328] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.703 [2024-06-07 23:29:46.207342] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.703 [2024-06-07 23:29:46.207349] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.703 [2024-06-07 23:29:46.207369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.703 qpair failed and we were unable to recover it. 
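With the listeners added (host/target_disconnect.sh@25 and @26), the target prints the "*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***" notice above. The equivalent direct calls, as a sketch under the same assumptions as earlier (repo root, default RPC socket); "discovery" here addresses the discovery subsystem:
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420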
00:33:23.703 [2024-06-07 23:29:46.217229] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.703 [2024-06-07 23:29:46.217361] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.703 [2024-06-07 23:29:46.217378] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.703 [2024-06-07 23:29:46.217385] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.703 [2024-06-07 23:29:46.217391] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.703 [2024-06-07 23:29:46.217406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.703 qpair failed and we were unable to recover it. 00:33:23.703 [2024-06-07 23:29:46.227204] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.703 [2024-06-07 23:29:46.227278] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.703 [2024-06-07 23:29:46.227294] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.703 [2024-06-07 23:29:46.227301] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.703 [2024-06-07 23:29:46.227307] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.703 [2024-06-07 23:29:46.227321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.703 qpair failed and we were unable to recover it. 00:33:23.703 [2024-06-07 23:29:46.237198] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.703 [2024-06-07 23:29:46.237294] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.703 [2024-06-07 23:29:46.237314] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.703 [2024-06-07 23:29:46.237321] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.703 [2024-06-07 23:29:46.237327] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.703 [2024-06-07 23:29:46.237341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.703 qpair failed and we were unable to recover it. 
00:33:23.703 [2024-06-07 23:29:46.247223] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.703 [2024-06-07 23:29:46.247289] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.703 [2024-06-07 23:29:46.247305] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.703 [2024-06-07 23:29:46.247312] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.703 [2024-06-07 23:29:46.247318] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.703 [2024-06-07 23:29:46.247332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.703 qpair failed and we were unable to recover it. 00:33:23.703 [2024-06-07 23:29:46.257262] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.703 [2024-06-07 23:29:46.257331] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.703 [2024-06-07 23:29:46.257346] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.703 [2024-06-07 23:29:46.257353] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.703 [2024-06-07 23:29:46.257359] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.703 [2024-06-07 23:29:46.257372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.703 qpair failed and we were unable to recover it. 00:33:23.703 [2024-06-07 23:29:46.267266] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.703 [2024-06-07 23:29:46.267333] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.703 [2024-06-07 23:29:46.267348] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.703 [2024-06-07 23:29:46.267355] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.703 [2024-06-07 23:29:46.267361] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.703 [2024-06-07 23:29:46.267374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.703 qpair failed and we were unable to recover it. 
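From this point the failure mode changes: the TCP connect succeeds, but the Fabrics CONNECT for an I/O queue pair is rejected because the target no longer knows controller ID 0x1 ("Unknown controller ID 0x1" in _nvmf_ctrlr_add_io_qpair). The host therefore sees sct 1, sc 130 (0x82, which in NVMe-oF terms should correspond to Connect Invalid Parameters) and reports CQ transport error -6 (-ENXIO, "No such device or address"). A quick way to check from the initiator side that the listener itself is still reachable, assuming nvme-cli is installed:
  nvme discover -t tcp -a 10.0.0.2 -s 4420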
00:33:23.703 [2024-06-07 23:29:46.277304] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.703 [2024-06-07 23:29:46.277375] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.703 [2024-06-07 23:29:46.277390] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.703 [2024-06-07 23:29:46.277397] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.703 [2024-06-07 23:29:46.277403] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.703 [2024-06-07 23:29:46.277420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.703 qpair failed and we were unable to recover it. 00:33:23.703 [2024-06-07 23:29:46.287213] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.703 [2024-06-07 23:29:46.287291] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.703 [2024-06-07 23:29:46.287309] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.703 [2024-06-07 23:29:46.287316] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.703 [2024-06-07 23:29:46.287322] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.703 [2024-06-07 23:29:46.287337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.703 qpair failed and we were unable to recover it. 00:33:23.703 [2024-06-07 23:29:46.297230] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.703 [2024-06-07 23:29:46.297296] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.703 [2024-06-07 23:29:46.297312] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.703 [2024-06-07 23:29:46.297319] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.703 [2024-06-07 23:29:46.297325] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.703 [2024-06-07 23:29:46.297339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.704 qpair failed and we were unable to recover it. 
00:33:23.704 [2024-06-07 23:29:46.307352] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.704 [2024-06-07 23:29:46.307417] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.704 [2024-06-07 23:29:46.307432] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.704 [2024-06-07 23:29:46.307438] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.704 [2024-06-07 23:29:46.307444] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.704 [2024-06-07 23:29:46.307458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.704 qpair failed and we were unable to recover it. 00:33:23.704 [2024-06-07 23:29:46.317401] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.704 [2024-06-07 23:29:46.317470] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.704 [2024-06-07 23:29:46.317485] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.704 [2024-06-07 23:29:46.317492] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.704 [2024-06-07 23:29:46.317498] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.704 [2024-06-07 23:29:46.317512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.704 qpair failed and we were unable to recover it. 00:33:23.704 [2024-06-07 23:29:46.327457] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.704 [2024-06-07 23:29:46.327516] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.704 [2024-06-07 23:29:46.327536] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.704 [2024-06-07 23:29:46.327543] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.704 [2024-06-07 23:29:46.327549] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.704 [2024-06-07 23:29:46.327563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.704 qpair failed and we were unable to recover it. 
00:33:23.704 [2024-06-07 23:29:46.337549] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.704 [2024-06-07 23:29:46.337617] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.704 [2024-06-07 23:29:46.337633] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.704 [2024-06-07 23:29:46.337639] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.704 [2024-06-07 23:29:46.337645] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.704 [2024-06-07 23:29:46.337659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.704 qpair failed and we were unable to recover it. 00:33:23.704 [2024-06-07 23:29:46.347398] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.704 [2024-06-07 23:29:46.347464] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.704 [2024-06-07 23:29:46.347479] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.704 [2024-06-07 23:29:46.347486] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.704 [2024-06-07 23:29:46.347492] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.704 [2024-06-07 23:29:46.347505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.704 qpair failed and we were unable to recover it. 00:33:23.704 [2024-06-07 23:29:46.357572] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.704 [2024-06-07 23:29:46.357648] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.704 [2024-06-07 23:29:46.357663] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.704 [2024-06-07 23:29:46.357669] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.704 [2024-06-07 23:29:46.357675] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.704 [2024-06-07 23:29:46.357689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.704 qpair failed and we were unable to recover it. 
00:33:23.704 [2024-06-07 23:29:46.367591] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.704 [2024-06-07 23:29:46.367658] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.704 [2024-06-07 23:29:46.367674] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.704 [2024-06-07 23:29:46.367680] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.704 [2024-06-07 23:29:46.367686] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.704 [2024-06-07 23:29:46.367703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.704 qpair failed and we were unable to recover it. 00:33:23.966 [2024-06-07 23:29:46.377596] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.966 [2024-06-07 23:29:46.377666] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.966 [2024-06-07 23:29:46.377683] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.966 [2024-06-07 23:29:46.377689] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.966 [2024-06-07 23:29:46.377696] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.966 [2024-06-07 23:29:46.377709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.966 qpair failed and we were unable to recover it. 00:33:23.966 [2024-06-07 23:29:46.387591] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.966 [2024-06-07 23:29:46.387656] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.966 [2024-06-07 23:29:46.387672] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.966 [2024-06-07 23:29:46.387679] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.966 [2024-06-07 23:29:46.387684] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.966 [2024-06-07 23:29:46.387698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.966 qpair failed and we were unable to recover it. 
00:33:23.966 [2024-06-07 23:29:46.397618] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.966 [2024-06-07 23:29:46.397687] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.966 [2024-06-07 23:29:46.397702] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.966 [2024-06-07 23:29:46.397709] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.966 [2024-06-07 23:29:46.397714] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.966 [2024-06-07 23:29:46.397728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.966 qpair failed and we were unable to recover it. 00:33:23.966 [2024-06-07 23:29:46.407651] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.966 [2024-06-07 23:29:46.407717] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.966 [2024-06-07 23:29:46.407732] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.966 [2024-06-07 23:29:46.407739] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.966 [2024-06-07 23:29:46.407745] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.966 [2024-06-07 23:29:46.407758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.966 qpair failed and we were unable to recover it. 00:33:23.966 [2024-06-07 23:29:46.417687] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.966 [2024-06-07 23:29:46.417755] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.966 [2024-06-07 23:29:46.417773] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.966 [2024-06-07 23:29:46.417780] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.966 [2024-06-07 23:29:46.417786] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.966 [2024-06-07 23:29:46.417799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.966 qpair failed and we were unable to recover it. 
00:33:23.966 [2024-06-07 23:29:46.427699] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.966 [2024-06-07 23:29:46.427766] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.966 [2024-06-07 23:29:46.427781] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.966 [2024-06-07 23:29:46.427788] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.966 [2024-06-07 23:29:46.427794] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.966 [2024-06-07 23:29:46.427807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.966 qpair failed and we were unable to recover it. 00:33:23.966 [2024-06-07 23:29:46.437755] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.966 [2024-06-07 23:29:46.437835] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.966 [2024-06-07 23:29:46.437850] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.966 [2024-06-07 23:29:46.437856] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.966 [2024-06-07 23:29:46.437862] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.966 [2024-06-07 23:29:46.437875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.966 qpair failed and we were unable to recover it. 00:33:23.966 [2024-06-07 23:29:46.447803] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.966 [2024-06-07 23:29:46.447868] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.966 [2024-06-07 23:29:46.447883] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.966 [2024-06-07 23:29:46.447889] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.966 [2024-06-07 23:29:46.447895] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.966 [2024-06-07 23:29:46.447909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.966 qpair failed and we were unable to recover it. 
00:33:23.966 [2024-06-07 23:29:46.457834] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.966 [2024-06-07 23:29:46.457911] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.966 [2024-06-07 23:29:46.457936] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.966 [2024-06-07 23:29:46.457944] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.966 [2024-06-07 23:29:46.457955] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.966 [2024-06-07 23:29:46.457973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.966 qpair failed and we were unable to recover it. 00:33:23.966 [2024-06-07 23:29:46.468044] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.966 [2024-06-07 23:29:46.468130] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.966 [2024-06-07 23:29:46.468155] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.966 [2024-06-07 23:29:46.468163] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.966 [2024-06-07 23:29:46.468169] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.966 [2024-06-07 23:29:46.468188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.967 qpair failed and we were unable to recover it. 00:33:23.967 [2024-06-07 23:29:46.477929] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.967 [2024-06-07 23:29:46.478039] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.967 [2024-06-07 23:29:46.478056] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.967 [2024-06-07 23:29:46.478063] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.967 [2024-06-07 23:29:46.478069] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.967 [2024-06-07 23:29:46.478083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.967 qpair failed and we were unable to recover it. 
00:33:23.967 [2024-06-07 23:29:46.487957] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.967 [2024-06-07 23:29:46.488029] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.967 [2024-06-07 23:29:46.488046] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.967 [2024-06-07 23:29:46.488052] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.967 [2024-06-07 23:29:46.488058] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.967 [2024-06-07 23:29:46.488072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.967 qpair failed and we were unable to recover it. 00:33:23.967 [2024-06-07 23:29:46.497870] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.967 [2024-06-07 23:29:46.497965] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.967 [2024-06-07 23:29:46.497980] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.967 [2024-06-07 23:29:46.497987] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.967 [2024-06-07 23:29:46.497993] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.967 [2024-06-07 23:29:46.498007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.967 qpair failed and we were unable to recover it. 00:33:23.967 [2024-06-07 23:29:46.507940] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.967 [2024-06-07 23:29:46.508019] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.967 [2024-06-07 23:29:46.508035] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.967 [2024-06-07 23:29:46.508041] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.967 [2024-06-07 23:29:46.508047] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.967 [2024-06-07 23:29:46.508061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.967 qpair failed and we were unable to recover it. 
00:33:23.967 [2024-06-07 23:29:46.518058] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.967 [2024-06-07 23:29:46.518175] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.967 [2024-06-07 23:29:46.518190] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.967 [2024-06-07 23:29:46.518196] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.967 [2024-06-07 23:29:46.518202] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.967 [2024-06-07 23:29:46.518216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.967 qpair failed and we were unable to recover it. 00:33:23.967 [2024-06-07 23:29:46.528000] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.967 [2024-06-07 23:29:46.528067] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.967 [2024-06-07 23:29:46.528082] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.967 [2024-06-07 23:29:46.528089] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.967 [2024-06-07 23:29:46.528095] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.967 [2024-06-07 23:29:46.528108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.967 qpair failed and we were unable to recover it. 00:33:23.967 [2024-06-07 23:29:46.538043] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.967 [2024-06-07 23:29:46.538110] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.967 [2024-06-07 23:29:46.538125] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.967 [2024-06-07 23:29:46.538131] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.967 [2024-06-07 23:29:46.538138] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.967 [2024-06-07 23:29:46.538151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.967 qpair failed and we were unable to recover it. 
00:33:23.967 [2024-06-07 23:29:46.548053] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.967 [2024-06-07 23:29:46.548122] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.967 [2024-06-07 23:29:46.548137] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.967 [2024-06-07 23:29:46.548143] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.967 [2024-06-07 23:29:46.548154] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.967 [2024-06-07 23:29:46.548168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.967 qpair failed and we were unable to recover it. 00:33:23.967 [2024-06-07 23:29:46.558082] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.967 [2024-06-07 23:29:46.558157] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.967 [2024-06-07 23:29:46.558172] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.967 [2024-06-07 23:29:46.558179] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.967 [2024-06-07 23:29:46.558185] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.967 [2024-06-07 23:29:46.558198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.967 qpair failed and we were unable to recover it. 00:33:23.967 [2024-06-07 23:29:46.568109] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.967 [2024-06-07 23:29:46.568180] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.967 [2024-06-07 23:29:46.568195] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.967 [2024-06-07 23:29:46.568202] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.967 [2024-06-07 23:29:46.568208] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.967 [2024-06-07 23:29:46.568221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.967 qpair failed and we were unable to recover it. 
00:33:23.967 [2024-06-07 23:29:46.578140] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.967 [2024-06-07 23:29:46.578210] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.967 [2024-06-07 23:29:46.578224] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.967 [2024-06-07 23:29:46.578231] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.967 [2024-06-07 23:29:46.578237] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.967 [2024-06-07 23:29:46.578255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.967 qpair failed and we were unable to recover it. 00:33:23.967 [2024-06-07 23:29:46.588072] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.967 [2024-06-07 23:29:46.588140] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.967 [2024-06-07 23:29:46.588158] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.967 [2024-06-07 23:29:46.588167] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.967 [2024-06-07 23:29:46.588173] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.967 [2024-06-07 23:29:46.588188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.967 qpair failed and we were unable to recover it. 00:33:23.967 [2024-06-07 23:29:46.598192] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.968 [2024-06-07 23:29:46.598271] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.968 [2024-06-07 23:29:46.598288] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.968 [2024-06-07 23:29:46.598295] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.968 [2024-06-07 23:29:46.598301] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.968 [2024-06-07 23:29:46.598315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.968 qpair failed and we were unable to recover it. 
00:33:23.968 [2024-06-07 23:29:46.608233] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.968 [2024-06-07 23:29:46.608307] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.968 [2024-06-07 23:29:46.608323] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.968 [2024-06-07 23:29:46.608329] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.968 [2024-06-07 23:29:46.608336] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.968 [2024-06-07 23:29:46.608349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.968 qpair failed and we were unable to recover it. 00:33:23.968 [2024-06-07 23:29:46.618257] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.968 [2024-06-07 23:29:46.618323] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.968 [2024-06-07 23:29:46.618338] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.968 [2024-06-07 23:29:46.618345] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.968 [2024-06-07 23:29:46.618351] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.968 [2024-06-07 23:29:46.618364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.968 qpair failed and we were unable to recover it. 00:33:23.968 [2024-06-07 23:29:46.628351] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.968 [2024-06-07 23:29:46.628414] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.968 [2024-06-07 23:29:46.628429] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.968 [2024-06-07 23:29:46.628435] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.968 [2024-06-07 23:29:46.628441] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.968 [2024-06-07 23:29:46.628455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.968 qpair failed and we were unable to recover it. 
00:33:23.968 [2024-06-07 23:29:46.638363] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.968 [2024-06-07 23:29:46.638442] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.968 [2024-06-07 23:29:46.638456] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.968 [2024-06-07 23:29:46.638463] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.968 [2024-06-07 23:29:46.638473] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:23.968 [2024-06-07 23:29:46.638487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.968 qpair failed and we were unable to recover it. 00:33:24.230 [2024-06-07 23:29:46.648392] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.230 [2024-06-07 23:29:46.648459] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.230 [2024-06-07 23:29:46.648474] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.230 [2024-06-07 23:29:46.648481] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.230 [2024-06-07 23:29:46.648487] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.230 [2024-06-07 23:29:46.648500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.230 qpair failed and we were unable to recover it. 00:33:24.230 [2024-06-07 23:29:46.658270] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.230 [2024-06-07 23:29:46.658336] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.230 [2024-06-07 23:29:46.658353] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.230 [2024-06-07 23:29:46.658360] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.230 [2024-06-07 23:29:46.658366] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.230 [2024-06-07 23:29:46.658380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.230 qpair failed and we were unable to recover it. 
00:33:24.230 [2024-06-07 23:29:46.668410] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.230 [2024-06-07 23:29:46.668480] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.230 [2024-06-07 23:29:46.668496] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.230 [2024-06-07 23:29:46.668502] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.230 [2024-06-07 23:29:46.668509] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.230 [2024-06-07 23:29:46.668523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.230 qpair failed and we were unable to recover it. 00:33:24.230 [2024-06-07 23:29:46.678443] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.230 [2024-06-07 23:29:46.678515] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.230 [2024-06-07 23:29:46.678530] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.230 [2024-06-07 23:29:46.678537] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.230 [2024-06-07 23:29:46.678543] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.230 [2024-06-07 23:29:46.678556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.230 qpair failed and we were unable to recover it. 00:33:24.230 [2024-06-07 23:29:46.688464] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.230 [2024-06-07 23:29:46.688531] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.230 [2024-06-07 23:29:46.688546] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.230 [2024-06-07 23:29:46.688552] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.231 [2024-06-07 23:29:46.688559] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.231 [2024-06-07 23:29:46.688572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.231 qpair failed and we were unable to recover it. 
00:33:24.231 [2024-06-07 23:29:46.698492] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.231 [2024-06-07 23:29:46.698613] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.231 [2024-06-07 23:29:46.698629] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.231 [2024-06-07 23:29:46.698636] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.231 [2024-06-07 23:29:46.698642] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.231 [2024-06-07 23:29:46.698655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.231 qpair failed and we were unable to recover it. 00:33:24.231 [2024-06-07 23:29:46.708536] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.231 [2024-06-07 23:29:46.708602] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.231 [2024-06-07 23:29:46.708617] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.231 [2024-06-07 23:29:46.708623] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.231 [2024-06-07 23:29:46.708629] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.231 [2024-06-07 23:29:46.708643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.231 qpair failed and we were unable to recover it. 00:33:24.231 [2024-06-07 23:29:46.718752] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.231 [2024-06-07 23:29:46.718827] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.231 [2024-06-07 23:29:46.718842] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.231 [2024-06-07 23:29:46.718848] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.231 [2024-06-07 23:29:46.718854] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.231 [2024-06-07 23:29:46.718867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.231 qpair failed and we were unable to recover it. 
00:33:24.231 [2024-06-07 23:29:46.728592] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.231 [2024-06-07 23:29:46.728669] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.231 [2024-06-07 23:29:46.728684] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.231 [2024-06-07 23:29:46.728691] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.231 [2024-06-07 23:29:46.728700] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.231 [2024-06-07 23:29:46.728714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.231 qpair failed and we were unable to recover it. 00:33:24.231 [2024-06-07 23:29:46.738485] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.231 [2024-06-07 23:29:46.738555] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.231 [2024-06-07 23:29:46.738570] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.231 [2024-06-07 23:29:46.738576] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.231 [2024-06-07 23:29:46.738582] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.231 [2024-06-07 23:29:46.738595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.231 qpair failed and we were unable to recover it. 00:33:24.231 [2024-06-07 23:29:46.748633] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.231 [2024-06-07 23:29:46.748700] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.231 [2024-06-07 23:29:46.748715] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.231 [2024-06-07 23:29:46.748722] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.231 [2024-06-07 23:29:46.748728] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.231 [2024-06-07 23:29:46.748742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.231 qpair failed and we were unable to recover it. 
00:33:24.231 [2024-06-07 23:29:46.758665] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.231 [2024-06-07 23:29:46.758731] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.231 [2024-06-07 23:29:46.758746] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.231 [2024-06-07 23:29:46.758752] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.231 [2024-06-07 23:29:46.758758] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.231 [2024-06-07 23:29:46.758772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.231 qpair failed and we were unable to recover it. 00:33:24.231 [2024-06-07 23:29:46.768672] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.231 [2024-06-07 23:29:46.768738] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.231 [2024-06-07 23:29:46.768753] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.231 [2024-06-07 23:29:46.768759] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.231 [2024-06-07 23:29:46.768765] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.231 [2024-06-07 23:29:46.768779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.231 qpair failed and we were unable to recover it. 00:33:24.231 [2024-06-07 23:29:46.778712] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.231 [2024-06-07 23:29:46.778784] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.231 [2024-06-07 23:29:46.778799] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.231 [2024-06-07 23:29:46.778806] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.231 [2024-06-07 23:29:46.778812] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.231 [2024-06-07 23:29:46.778825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.231 qpair failed and we were unable to recover it. 
00:33:24.231 [2024-06-07 23:29:46.788738] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.231 [2024-06-07 23:29:46.788806] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.231 [2024-06-07 23:29:46.788821] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.231 [2024-06-07 23:29:46.788828] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.231 [2024-06-07 23:29:46.788834] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.231 [2024-06-07 23:29:46.788847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.231 qpair failed and we were unable to recover it. 00:33:24.231 [2024-06-07 23:29:46.798751] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.231 [2024-06-07 23:29:46.798857] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.231 [2024-06-07 23:29:46.798873] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.231 [2024-06-07 23:29:46.798880] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.231 [2024-06-07 23:29:46.798887] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.231 [2024-06-07 23:29:46.798900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.231 qpair failed and we were unable to recover it. 00:33:24.231 [2024-06-07 23:29:46.808822] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.231 [2024-06-07 23:29:46.808934] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.231 [2024-06-07 23:29:46.808950] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.231 [2024-06-07 23:29:46.808956] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.231 [2024-06-07 23:29:46.808963] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.231 [2024-06-07 23:29:46.808976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.231 qpair failed and we were unable to recover it. 
00:33:24.231 [2024-06-07 23:29:46.818857] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.231 [2024-06-07 23:29:46.818973] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.232 [2024-06-07 23:29:46.818989] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.232 [2024-06-07 23:29:46.818999] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.232 [2024-06-07 23:29:46.819005] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.232 [2024-06-07 23:29:46.819019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.232 qpair failed and we were unable to recover it. 00:33:24.232 [2024-06-07 23:29:46.828896] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.232 [2024-06-07 23:29:46.828972] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.232 [2024-06-07 23:29:46.828997] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.232 [2024-06-07 23:29:46.829005] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.232 [2024-06-07 23:29:46.829013] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.232 [2024-06-07 23:29:46.829032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.232 qpair failed and we were unable to recover it. 00:33:24.232 [2024-06-07 23:29:46.838933] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.232 [2024-06-07 23:29:46.839024] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.232 [2024-06-07 23:29:46.839050] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.232 [2024-06-07 23:29:46.839058] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.232 [2024-06-07 23:29:46.839064] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.232 [2024-06-07 23:29:46.839083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.232 qpair failed and we were unable to recover it. 
00:33:24.232 [2024-06-07 23:29:46.848924] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.232 [2024-06-07 23:29:46.848992] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.232 [2024-06-07 23:29:46.849009] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.232 [2024-06-07 23:29:46.849016] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.232 [2024-06-07 23:29:46.849022] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.232 [2024-06-07 23:29:46.849037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.232 qpair failed and we were unable to recover it. 00:33:24.232 [2024-06-07 23:29:46.858954] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.232 [2024-06-07 23:29:46.859028] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.232 [2024-06-07 23:29:46.859052] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.232 [2024-06-07 23:29:46.859060] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.232 [2024-06-07 23:29:46.859066] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.232 [2024-06-07 23:29:46.859085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.232 qpair failed and we were unable to recover it. 00:33:24.232 [2024-06-07 23:29:46.868962] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.232 [2024-06-07 23:29:46.869030] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.232 [2024-06-07 23:29:46.869047] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.232 [2024-06-07 23:29:46.869054] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.232 [2024-06-07 23:29:46.869060] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.232 [2024-06-07 23:29:46.869075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.232 qpair failed and we were unable to recover it. 
00:33:24.232 [2024-06-07 23:29:46.878992] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.232 [2024-06-07 23:29:46.879070] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.232 [2024-06-07 23:29:46.879095] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.232 [2024-06-07 23:29:46.879103] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.232 [2024-06-07 23:29:46.879109] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.232 [2024-06-07 23:29:46.879128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.232 qpair failed and we were unable to recover it. 00:33:24.232 [2024-06-07 23:29:46.889087] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.232 [2024-06-07 23:29:46.889155] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.232 [2024-06-07 23:29:46.889172] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.232 [2024-06-07 23:29:46.889179] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.232 [2024-06-07 23:29:46.889185] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.232 [2024-06-07 23:29:46.889200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.232 qpair failed and we were unable to recover it. 00:33:24.232 [2024-06-07 23:29:46.899000] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.232 [2024-06-07 23:29:46.899068] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.232 [2024-06-07 23:29:46.899084] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.232 [2024-06-07 23:29:46.899090] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.232 [2024-06-07 23:29:46.899096] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.232 [2024-06-07 23:29:46.899110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.232 qpair failed and we were unable to recover it. 
00:33:24.232 [2024-06-07 23:29:46.909072] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.232 [2024-06-07 23:29:46.909141] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.232 [2024-06-07 23:29:46.909156] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.232 [2024-06-07 23:29:46.909167] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.232 [2024-06-07 23:29:46.909173] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.232 [2024-06-07 23:29:46.909187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.232 qpair failed and we were unable to recover it. 00:33:24.495 [2024-06-07 23:29:46.919108] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.495 [2024-06-07 23:29:46.919205] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.495 [2024-06-07 23:29:46.919220] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.495 [2024-06-07 23:29:46.919227] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.495 [2024-06-07 23:29:46.919233] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.495 [2024-06-07 23:29:46.919255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.495 qpair failed and we were unable to recover it. 00:33:24.495 [2024-06-07 23:29:46.929124] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.495 [2024-06-07 23:29:46.929198] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.495 [2024-06-07 23:29:46.929212] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.495 [2024-06-07 23:29:46.929219] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.495 [2024-06-07 23:29:46.929225] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.495 [2024-06-07 23:29:46.929239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.495 qpair failed and we were unable to recover it. 
00:33:24.496 [2024-06-07 23:29:46.939073] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.496 [2024-06-07 23:29:46.939138] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.496 [2024-06-07 23:29:46.939153] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.496 [2024-06-07 23:29:46.939159] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.496 [2024-06-07 23:29:46.939165] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.496 [2024-06-07 23:29:46.939178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.496 qpair failed and we were unable to recover it. 00:33:24.496 [2024-06-07 23:29:46.949201] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.496 [2024-06-07 23:29:46.949277] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.496 [2024-06-07 23:29:46.949292] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.496 [2024-06-07 23:29:46.949299] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.496 [2024-06-07 23:29:46.949305] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.496 [2024-06-07 23:29:46.949319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.496 qpair failed and we were unable to recover it. 00:33:24.496 [2024-06-07 23:29:46.959239] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.496 [2024-06-07 23:29:46.959314] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.496 [2024-06-07 23:29:46.959330] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.496 [2024-06-07 23:29:46.959336] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.496 [2024-06-07 23:29:46.959342] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.496 [2024-06-07 23:29:46.959355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.496 qpair failed and we were unable to recover it. 
00:33:24.496 [2024-06-07 23:29:46.969262] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.496 [2024-06-07 23:29:46.969329] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.496 [2024-06-07 23:29:46.969344] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.496 [2024-06-07 23:29:46.969350] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.496 [2024-06-07 23:29:46.969356] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.496 [2024-06-07 23:29:46.969369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.496 qpair failed and we were unable to recover it. 00:33:24.496 [2024-06-07 23:29:46.979340] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.496 [2024-06-07 23:29:46.979405] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.496 [2024-06-07 23:29:46.979420] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.496 [2024-06-07 23:29:46.979426] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.496 [2024-06-07 23:29:46.979432] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.496 [2024-06-07 23:29:46.979445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.496 qpair failed and we were unable to recover it. 00:33:24.496 [2024-06-07 23:29:46.989332] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.496 [2024-06-07 23:29:46.989401] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.496 [2024-06-07 23:29:46.989416] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.496 [2024-06-07 23:29:46.989422] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.496 [2024-06-07 23:29:46.989429] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.496 [2024-06-07 23:29:46.989442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.496 qpair failed and we were unable to recover it. 
00:33:24.496 [2024-06-07 23:29:46.999355] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.496 [2024-06-07 23:29:46.999429] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.496 [2024-06-07 23:29:46.999445] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.496 [2024-06-07 23:29:46.999455] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.496 [2024-06-07 23:29:46.999461] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.496 [2024-06-07 23:29:46.999475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.496 qpair failed and we were unable to recover it. 00:33:24.496 [2024-06-07 23:29:47.009367] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.496 [2024-06-07 23:29:47.009434] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.496 [2024-06-07 23:29:47.009449] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.496 [2024-06-07 23:29:47.009456] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.496 [2024-06-07 23:29:47.009462] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.496 [2024-06-07 23:29:47.009475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.496 qpair failed and we were unable to recover it. 00:33:24.496 [2024-06-07 23:29:47.019408] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.496 [2024-06-07 23:29:47.019470] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.496 [2024-06-07 23:29:47.019486] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.496 [2024-06-07 23:29:47.019493] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.496 [2024-06-07 23:29:47.019499] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.496 [2024-06-07 23:29:47.019512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.496 qpair failed and we were unable to recover it. 
00:33:24.496 [2024-06-07 23:29:47.029475] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.496 [2024-06-07 23:29:47.029539] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.496 [2024-06-07 23:29:47.029554] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.496 [2024-06-07 23:29:47.029561] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.496 [2024-06-07 23:29:47.029567] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.496 [2024-06-07 23:29:47.029580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.496 qpair failed and we were unable to recover it. 00:33:24.496 [2024-06-07 23:29:47.039498] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.496 [2024-06-07 23:29:47.039570] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.496 [2024-06-07 23:29:47.039586] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.496 [2024-06-07 23:29:47.039593] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.496 [2024-06-07 23:29:47.039599] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.496 [2024-06-07 23:29:47.039612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.496 qpair failed and we were unable to recover it. 00:33:24.496 [2024-06-07 23:29:47.049385] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.496 [2024-06-07 23:29:47.049512] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.496 [2024-06-07 23:29:47.049527] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.496 [2024-06-07 23:29:47.049534] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.496 [2024-06-07 23:29:47.049540] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.496 [2024-06-07 23:29:47.049553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.496 qpair failed and we were unable to recover it. 
00:33:24.496 [2024-06-07 23:29:47.059529] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.496 [2024-06-07 23:29:47.059590] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.496 [2024-06-07 23:29:47.059606] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.496 [2024-06-07 23:29:47.059613] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.496 [2024-06-07 23:29:47.059619] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.496 [2024-06-07 23:29:47.059632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.496 qpair failed and we were unable to recover it. 00:33:24.496 [2024-06-07 23:29:47.069571] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.497 [2024-06-07 23:29:47.069637] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.497 [2024-06-07 23:29:47.069652] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.497 [2024-06-07 23:29:47.069658] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.497 [2024-06-07 23:29:47.069664] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.497 [2024-06-07 23:29:47.069678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.497 qpair failed and we were unable to recover it. 00:33:24.497 [2024-06-07 23:29:47.079610] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.497 [2024-06-07 23:29:47.079702] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.497 [2024-06-07 23:29:47.079717] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.497 [2024-06-07 23:29:47.079723] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.497 [2024-06-07 23:29:47.079729] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.497 [2024-06-07 23:29:47.079743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.497 qpair failed and we were unable to recover it. 
00:33:24.497 [2024-06-07 23:29:47.089658] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.497 [2024-06-07 23:29:47.089724] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.497 [2024-06-07 23:29:47.089739] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.497 [2024-06-07 23:29:47.089749] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.497 [2024-06-07 23:29:47.089755] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.497 [2024-06-07 23:29:47.089768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.497 qpair failed and we were unable to recover it. 00:33:24.497 [2024-06-07 23:29:47.099663] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.497 [2024-06-07 23:29:47.099765] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.497 [2024-06-07 23:29:47.099780] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.497 [2024-06-07 23:29:47.099787] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.497 [2024-06-07 23:29:47.099793] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.497 [2024-06-07 23:29:47.099806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.497 qpair failed and we were unable to recover it. 00:33:24.497 [2024-06-07 23:29:47.109636] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.497 [2024-06-07 23:29:47.109699] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.497 [2024-06-07 23:29:47.109714] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.497 [2024-06-07 23:29:47.109720] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.497 [2024-06-07 23:29:47.109726] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.497 [2024-06-07 23:29:47.109739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.497 qpair failed and we were unable to recover it. 
00:33:24.497 [2024-06-07 23:29:47.119694] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.497 [2024-06-07 23:29:47.119765] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.497 [2024-06-07 23:29:47.119780] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.497 [2024-06-07 23:29:47.119787] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.497 [2024-06-07 23:29:47.119793] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.497 [2024-06-07 23:29:47.119806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.497 qpair failed and we were unable to recover it. 00:33:24.497 [2024-06-07 23:29:47.129740] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.497 [2024-06-07 23:29:47.129837] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.497 [2024-06-07 23:29:47.129852] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.497 [2024-06-07 23:29:47.129859] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.497 [2024-06-07 23:29:47.129865] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.497 [2024-06-07 23:29:47.129878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.497 qpair failed and we were unable to recover it. 00:33:24.497 [2024-06-07 23:29:47.139749] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.497 [2024-06-07 23:29:47.139814] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.497 [2024-06-07 23:29:47.139829] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.497 [2024-06-07 23:29:47.139836] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.497 [2024-06-07 23:29:47.139842] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.497 [2024-06-07 23:29:47.139855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.497 qpair failed and we were unable to recover it. 
00:33:24.497 [2024-06-07 23:29:47.149776] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.497 [2024-06-07 23:29:47.149842] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.497 [2024-06-07 23:29:47.149857] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.497 [2024-06-07 23:29:47.149864] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.497 [2024-06-07 23:29:47.149870] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.497 [2024-06-07 23:29:47.149883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.497 qpair failed and we were unable to recover it. 00:33:24.497 [2024-06-07 23:29:47.159843] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.497 [2024-06-07 23:29:47.159960] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.497 [2024-06-07 23:29:47.159975] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.497 [2024-06-07 23:29:47.159982] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.497 [2024-06-07 23:29:47.159988] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.497 [2024-06-07 23:29:47.160001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.497 qpair failed and we were unable to recover it. 00:33:24.497 [2024-06-07 23:29:47.169836] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.497 [2024-06-07 23:29:47.169904] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.497 [2024-06-07 23:29:47.169919] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.497 [2024-06-07 23:29:47.169926] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.497 [2024-06-07 23:29:47.169932] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.497 [2024-06-07 23:29:47.169945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.497 qpair failed and we were unable to recover it. 
00:33:24.759 [2024-06-07 23:29:47.179873] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.759 [2024-06-07 23:29:47.179937] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.759 [2024-06-07 23:29:47.179956] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.759 [2024-06-07 23:29:47.179962] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.759 [2024-06-07 23:29:47.179968] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.759 [2024-06-07 23:29:47.179982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.759 qpair failed and we were unable to recover it. 00:33:24.759 [2024-06-07 23:29:47.189789] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.759 [2024-06-07 23:29:47.189855] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.759 [2024-06-07 23:29:47.189870] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.759 [2024-06-07 23:29:47.189877] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.759 [2024-06-07 23:29:47.189882] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.759 [2024-06-07 23:29:47.189896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.759 qpair failed and we were unable to recover it. 00:33:24.759 [2024-06-07 23:29:47.199938] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.759 [2024-06-07 23:29:47.200007] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.759 [2024-06-07 23:29:47.200022] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.759 [2024-06-07 23:29:47.200028] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.759 [2024-06-07 23:29:47.200034] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.759 [2024-06-07 23:29:47.200047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.759 qpair failed and we were unable to recover it. 
00:33:24.759 [2024-06-07 23:29:47.209947] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.759 [2024-06-07 23:29:47.210020] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.759 [2024-06-07 23:29:47.210045] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.759 [2024-06-07 23:29:47.210053] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.759 [2024-06-07 23:29:47.210059] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.759 [2024-06-07 23:29:47.210078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.759 qpair failed and we were unable to recover it. 00:33:24.759 [2024-06-07 23:29:47.219992] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.759 [2024-06-07 23:29:47.220067] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.759 [2024-06-07 23:29:47.220091] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.759 [2024-06-07 23:29:47.220099] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.759 [2024-06-07 23:29:47.220106] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.759 [2024-06-07 23:29:47.220124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.759 qpair failed and we were unable to recover it. 00:33:24.759 [2024-06-07 23:29:47.230025] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.759 [2024-06-07 23:29:47.230108] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.759 [2024-06-07 23:29:47.230124] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.759 [2024-06-07 23:29:47.230130] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.759 [2024-06-07 23:29:47.230137] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.759 [2024-06-07 23:29:47.230151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.759 qpair failed and we were unable to recover it. 
00:33:24.759 [2024-06-07 23:29:47.239991] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.759 [2024-06-07 23:29:47.240063] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.759 [2024-06-07 23:29:47.240079] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.759 [2024-06-07 23:29:47.240085] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.759 [2024-06-07 23:29:47.240092] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.759 [2024-06-07 23:29:47.240105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.759 qpair failed and we were unable to recover it. 00:33:24.759 [2024-06-07 23:29:47.250068] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.759 [2024-06-07 23:29:47.250141] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.759 [2024-06-07 23:29:47.250157] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.759 [2024-06-07 23:29:47.250163] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.759 [2024-06-07 23:29:47.250169] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.759 [2024-06-07 23:29:47.250183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.759 qpair failed and we were unable to recover it. 00:33:24.759 [2024-06-07 23:29:47.260092] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.759 [2024-06-07 23:29:47.260154] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.759 [2024-06-07 23:29:47.260169] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.759 [2024-06-07 23:29:47.260175] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.759 [2024-06-07 23:29:47.260182] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.759 [2024-06-07 23:29:47.260195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.759 qpair failed and we were unable to recover it. 
00:33:24.759 [2024-06-07 23:29:47.270132] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.759 [2024-06-07 23:29:47.270201] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.759 [2024-06-07 23:29:47.270219] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.759 [2024-06-07 23:29:47.270225] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.759 [2024-06-07 23:29:47.270232] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.759 [2024-06-07 23:29:47.270250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.759 qpair failed and we were unable to recover it. 00:33:24.759 [2024-06-07 23:29:47.280161] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.759 [2024-06-07 23:29:47.280231] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.759 [2024-06-07 23:29:47.280250] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.760 [2024-06-07 23:29:47.280257] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.760 [2024-06-07 23:29:47.280265] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.760 [2024-06-07 23:29:47.280278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.760 qpair failed and we were unable to recover it. 00:33:24.760 [2024-06-07 23:29:47.290236] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.760 [2024-06-07 23:29:47.290305] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.760 [2024-06-07 23:29:47.290320] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.760 [2024-06-07 23:29:47.290327] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.760 [2024-06-07 23:29:47.290333] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.760 [2024-06-07 23:29:47.290347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.760 qpair failed and we were unable to recover it. 
00:33:24.760 [2024-06-07 23:29:47.300219] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.760 [2024-06-07 23:29:47.300322] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.760 [2024-06-07 23:29:47.300337] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.760 [2024-06-07 23:29:47.300344] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.760 [2024-06-07 23:29:47.300350] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.760 [2024-06-07 23:29:47.300363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.760 qpair failed and we were unable to recover it. 00:33:24.760 [2024-06-07 23:29:47.310259] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.760 [2024-06-07 23:29:47.310341] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.760 [2024-06-07 23:29:47.310356] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.760 [2024-06-07 23:29:47.310363] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.760 [2024-06-07 23:29:47.310369] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.760 [2024-06-07 23:29:47.310386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.760 qpair failed and we were unable to recover it. 00:33:24.760 [2024-06-07 23:29:47.320265] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.760 [2024-06-07 23:29:47.320337] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.760 [2024-06-07 23:29:47.320352] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.760 [2024-06-07 23:29:47.320359] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.760 [2024-06-07 23:29:47.320365] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.760 [2024-06-07 23:29:47.320378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.760 qpair failed and we were unable to recover it. 
00:33:24.760 [2024-06-07 23:29:47.330289] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.760 [2024-06-07 23:29:47.330366] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.760 [2024-06-07 23:29:47.330384] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.760 [2024-06-07 23:29:47.330391] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.760 [2024-06-07 23:29:47.330397] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.760 [2024-06-07 23:29:47.330412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.760 qpair failed and we were unable to recover it. 00:33:24.760 [2024-06-07 23:29:47.340339] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.760 [2024-06-07 23:29:47.340413] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.760 [2024-06-07 23:29:47.340430] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.760 [2024-06-07 23:29:47.340436] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.760 [2024-06-07 23:29:47.340442] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.760 [2024-06-07 23:29:47.340456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.760 qpair failed and we were unable to recover it. 00:33:24.760 [2024-06-07 23:29:47.350339] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.760 [2024-06-07 23:29:47.350404] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.760 [2024-06-07 23:29:47.350419] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.760 [2024-06-07 23:29:47.350426] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.760 [2024-06-07 23:29:47.350432] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.760 [2024-06-07 23:29:47.350446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.760 qpair failed and we were unable to recover it. 
00:33:24.760 [2024-06-07 23:29:47.360393] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.760 [2024-06-07 23:29:47.360480] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.760 [2024-06-07 23:29:47.360498] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.760 [2024-06-07 23:29:47.360505] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.760 [2024-06-07 23:29:47.360511] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.760 [2024-06-07 23:29:47.360524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.760 qpair failed and we were unable to recover it. 00:33:24.760 [2024-06-07 23:29:47.370395] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.760 [2024-06-07 23:29:47.370464] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.760 [2024-06-07 23:29:47.370479] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.760 [2024-06-07 23:29:47.370486] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.760 [2024-06-07 23:29:47.370492] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.760 [2024-06-07 23:29:47.370505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.760 qpair failed and we were unable to recover it. 00:33:24.760 [2024-06-07 23:29:47.380486] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.760 [2024-06-07 23:29:47.380589] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.760 [2024-06-07 23:29:47.380604] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.760 [2024-06-07 23:29:47.380610] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.760 [2024-06-07 23:29:47.380616] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.760 [2024-06-07 23:29:47.380629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.760 qpair failed and we were unable to recover it. 
00:33:24.760 [2024-06-07 23:29:47.390471] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.760 [2024-06-07 23:29:47.390536] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.760 [2024-06-07 23:29:47.390551] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.760 [2024-06-07 23:29:47.390558] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.760 [2024-06-07 23:29:47.390563] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.760 [2024-06-07 23:29:47.390577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.760 qpair failed and we were unable to recover it. 00:33:24.760 [2024-06-07 23:29:47.400514] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.760 [2024-06-07 23:29:47.400583] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.760 [2024-06-07 23:29:47.400597] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.760 [2024-06-07 23:29:47.400604] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.760 [2024-06-07 23:29:47.400610] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.760 [2024-06-07 23:29:47.400626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.760 qpair failed and we were unable to recover it. 00:33:24.760 [2024-06-07 23:29:47.410604] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.761 [2024-06-07 23:29:47.410670] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.761 [2024-06-07 23:29:47.410685] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.761 [2024-06-07 23:29:47.410691] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.761 [2024-06-07 23:29:47.410697] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.761 [2024-06-07 23:29:47.410710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.761 qpair failed and we were unable to recover it. 
00:33:24.761 [2024-06-07 23:29:47.420550] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.761 [2024-06-07 23:29:47.420616] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.761 [2024-06-07 23:29:47.420632] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.761 [2024-06-07 23:29:47.420639] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.761 [2024-06-07 23:29:47.420645] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.761 [2024-06-07 23:29:47.420658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.761 qpair failed and we were unable to recover it. 00:33:24.761 [2024-06-07 23:29:47.430558] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.761 [2024-06-07 23:29:47.430621] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.761 [2024-06-07 23:29:47.430636] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.761 [2024-06-07 23:29:47.430643] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.761 [2024-06-07 23:29:47.430649] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:24.761 [2024-06-07 23:29:47.430662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.761 qpair failed and we were unable to recover it. 00:33:25.022 [2024-06-07 23:29:47.440623] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.022 [2024-06-07 23:29:47.440699] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.022 [2024-06-07 23:29:47.440715] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.022 [2024-06-07 23:29:47.440721] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.022 [2024-06-07 23:29:47.440727] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.022 [2024-06-07 23:29:47.440741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.022 qpair failed and we were unable to recover it. 
00:33:25.022 [2024-06-07 23:29:47.450632] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.022 [2024-06-07 23:29:47.450693] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.022 [2024-06-07 23:29:47.450711] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.022 [2024-06-07 23:29:47.450718] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.022 [2024-06-07 23:29:47.450724] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.022 [2024-06-07 23:29:47.450737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.022 qpair failed and we were unable to recover it. 00:33:25.022 [2024-06-07 23:29:47.460607] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.022 [2024-06-07 23:29:47.460678] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.022 [2024-06-07 23:29:47.460693] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.022 [2024-06-07 23:29:47.460700] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.022 [2024-06-07 23:29:47.460706] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.022 [2024-06-07 23:29:47.460719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.022 qpair failed and we were unable to recover it. 00:33:25.022 [2024-06-07 23:29:47.470670] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.022 [2024-06-07 23:29:47.470787] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.022 [2024-06-07 23:29:47.470802] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.022 [2024-06-07 23:29:47.470809] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.022 [2024-06-07 23:29:47.470815] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.022 [2024-06-07 23:29:47.470829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.022 qpair failed and we were unable to recover it. 
00:33:25.022 [2024-06-07 23:29:47.480600] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.022 [2024-06-07 23:29:47.480679] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.022 [2024-06-07 23:29:47.480695] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.022 [2024-06-07 23:29:47.480701] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.022 [2024-06-07 23:29:47.480707] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.022 [2024-06-07 23:29:47.480721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.022 qpair failed and we were unable to recover it. 00:33:25.022 [2024-06-07 23:29:47.490676] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.022 [2024-06-07 23:29:47.490773] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.022 [2024-06-07 23:29:47.490789] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.022 [2024-06-07 23:29:47.490795] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.022 [2024-06-07 23:29:47.490801] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.022 [2024-06-07 23:29:47.490818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.022 qpair failed and we were unable to recover it. 00:33:25.022 [2024-06-07 23:29:47.500753] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.022 [2024-06-07 23:29:47.500821] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.022 [2024-06-07 23:29:47.500836] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.022 [2024-06-07 23:29:47.500843] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.022 [2024-06-07 23:29:47.500849] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.022 [2024-06-07 23:29:47.500862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.022 qpair failed and we were unable to recover it. 
00:33:25.022 [2024-06-07 23:29:47.510784] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.022 [2024-06-07 23:29:47.510847] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.022 [2024-06-07 23:29:47.510862] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.022 [2024-06-07 23:29:47.510868] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.022 [2024-06-07 23:29:47.510874] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.022 [2024-06-07 23:29:47.510887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.022 qpair failed and we were unable to recover it. 00:33:25.022 [2024-06-07 23:29:47.520804] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.022 [2024-06-07 23:29:47.520906] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.022 [2024-06-07 23:29:47.520921] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.022 [2024-06-07 23:29:47.520927] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.022 [2024-06-07 23:29:47.520933] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.022 [2024-06-07 23:29:47.520946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.022 qpair failed and we were unable to recover it. 00:33:25.022 [2024-06-07 23:29:47.530848] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.022 [2024-06-07 23:29:47.530921] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.022 [2024-06-07 23:29:47.530945] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.022 [2024-06-07 23:29:47.530953] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.022 [2024-06-07 23:29:47.530960] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.022 [2024-06-07 23:29:47.530978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.022 qpair failed and we were unable to recover it. 
00:33:25.022 [2024-06-07 23:29:47.540944] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.022 [2024-06-07 23:29:47.541015] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.022 [2024-06-07 23:29:47.541053] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.022 [2024-06-07 23:29:47.541062] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.022 [2024-06-07 23:29:47.541068] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.022 [2024-06-07 23:29:47.541088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.022 qpair failed and we were unable to recover it. 00:33:25.022 [2024-06-07 23:29:47.550890] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.022 [2024-06-07 23:29:47.550965] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.022 [2024-06-07 23:29:47.550989] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.022 [2024-06-07 23:29:47.550997] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.022 [2024-06-07 23:29:47.551003] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.023 [2024-06-07 23:29:47.551022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.023 qpair failed and we were unable to recover it. 00:33:25.023 [2024-06-07 23:29:47.560918] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.023 [2024-06-07 23:29:47.560993] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.023 [2024-06-07 23:29:47.561010] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.023 [2024-06-07 23:29:47.561017] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.023 [2024-06-07 23:29:47.561023] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.023 [2024-06-07 23:29:47.561038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.023 qpair failed and we were unable to recover it. 
00:33:25.023 [2024-06-07 23:29:47.570831] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.023 [2024-06-07 23:29:47.570893] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.023 [2024-06-07 23:29:47.570908] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.023 [2024-06-07 23:29:47.570915] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.023 [2024-06-07 23:29:47.570921] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.023 [2024-06-07 23:29:47.570935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.023 qpair failed and we were unable to recover it. 00:33:25.023 [2024-06-07 23:29:47.580973] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.023 [2024-06-07 23:29:47.581057] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.023 [2024-06-07 23:29:47.581072] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.023 [2024-06-07 23:29:47.581079] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.023 [2024-06-07 23:29:47.581085] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.023 [2024-06-07 23:29:47.581102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.023 qpair failed and we were unable to recover it. 00:33:25.023 [2024-06-07 23:29:47.591053] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.023 [2024-06-07 23:29:47.591168] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.023 [2024-06-07 23:29:47.591183] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.023 [2024-06-07 23:29:47.591190] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.023 [2024-06-07 23:29:47.591196] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.023 [2024-06-07 23:29:47.591210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.023 qpair failed and we were unable to recover it. 
00:33:25.023 [2024-06-07 23:29:47.601049] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.023 [2024-06-07 23:29:47.601120] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.023 [2024-06-07 23:29:47.601136] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.023 [2024-06-07 23:29:47.601142] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.023 [2024-06-07 23:29:47.601149] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.023 [2024-06-07 23:29:47.601162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.023 qpair failed and we were unable to recover it. 00:33:25.023 [2024-06-07 23:29:47.611068] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.023 [2024-06-07 23:29:47.611130] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.023 [2024-06-07 23:29:47.611145] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.023 [2024-06-07 23:29:47.611151] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.023 [2024-06-07 23:29:47.611157] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.023 [2024-06-07 23:29:47.611171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.023 qpair failed and we were unable to recover it. 00:33:25.023 [2024-06-07 23:29:47.621095] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.023 [2024-06-07 23:29:47.621162] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.023 [2024-06-07 23:29:47.621178] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.023 [2024-06-07 23:29:47.621184] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.023 [2024-06-07 23:29:47.621190] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.023 [2024-06-07 23:29:47.621203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.023 qpair failed and we were unable to recover it. 
00:33:25.023 [2024-06-07 23:29:47.631022] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.023 [2024-06-07 23:29:47.631093] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.023 [2024-06-07 23:29:47.631116] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.023 [2024-06-07 23:29:47.631122] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.023 [2024-06-07 23:29:47.631129] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.023 [2024-06-07 23:29:47.631142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.023 qpair failed and we were unable to recover it. 00:33:25.023 [2024-06-07 23:29:47.641155] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.023 [2024-06-07 23:29:47.641224] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.023 [2024-06-07 23:29:47.641239] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.023 [2024-06-07 23:29:47.641251] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.023 [2024-06-07 23:29:47.641257] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.023 [2024-06-07 23:29:47.641270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.023 qpair failed and we were unable to recover it. 00:33:25.023 [2024-06-07 23:29:47.651176] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.023 [2024-06-07 23:29:47.651245] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.023 [2024-06-07 23:29:47.651260] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.023 [2024-06-07 23:29:47.651266] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.023 [2024-06-07 23:29:47.651272] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.023 [2024-06-07 23:29:47.651286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.023 qpair failed and we were unable to recover it. 
00:33:25.023 [2024-06-07 23:29:47.661216] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.023 [2024-06-07 23:29:47.661286] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.023 [2024-06-07 23:29:47.661301] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.023 [2024-06-07 23:29:47.661308] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.023 [2024-06-07 23:29:47.661314] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.023 [2024-06-07 23:29:47.661327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.023 qpair failed and we were unable to recover it. 00:33:25.023 [2024-06-07 23:29:47.671166] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.023 [2024-06-07 23:29:47.671267] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.023 [2024-06-07 23:29:47.671282] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.023 [2024-06-07 23:29:47.671289] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.023 [2024-06-07 23:29:47.671298] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.023 [2024-06-07 23:29:47.671312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.023 qpair failed and we were unable to recover it. 00:33:25.023 [2024-06-07 23:29:47.681270] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.023 [2024-06-07 23:29:47.681336] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.023 [2024-06-07 23:29:47.681351] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.023 [2024-06-07 23:29:47.681358] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.023 [2024-06-07 23:29:47.681364] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.023 [2024-06-07 23:29:47.681377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.023 qpair failed and we were unable to recover it. 
00:33:25.023 [2024-06-07 23:29:47.691229] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.024 [2024-06-07 23:29:47.691329] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.024 [2024-06-07 23:29:47.691345] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.024 [2024-06-07 23:29:47.691352] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.024 [2024-06-07 23:29:47.691358] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.024 [2024-06-07 23:29:47.691371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.024 qpair failed and we were unable to recover it. 00:33:25.024 [2024-06-07 23:29:47.701405] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.024 [2024-06-07 23:29:47.701473] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.024 [2024-06-07 23:29:47.701488] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.024 [2024-06-07 23:29:47.701495] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.024 [2024-06-07 23:29:47.701501] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.024 [2024-06-07 23:29:47.701514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.024 qpair failed and we were unable to recover it. 00:33:25.285 [2024-06-07 23:29:47.711244] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.285 [2024-06-07 23:29:47.711314] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.285 [2024-06-07 23:29:47.711329] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.285 [2024-06-07 23:29:47.711335] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.285 [2024-06-07 23:29:47.711342] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.285 [2024-06-07 23:29:47.711355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.285 qpair failed and we were unable to recover it. 
00:33:25.285 [2024-06-07 23:29:47.721382] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.285 [2024-06-07 23:29:47.721471] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.285 [2024-06-07 23:29:47.721486] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.285 [2024-06-07 23:29:47.721493] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.285 [2024-06-07 23:29:47.721499] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.285 [2024-06-07 23:29:47.721512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.285 qpair failed and we were unable to recover it. 00:33:25.285 [2024-06-07 23:29:47.731417] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.285 [2024-06-07 23:29:47.731483] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.285 [2024-06-07 23:29:47.731498] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.286 [2024-06-07 23:29:47.731504] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.286 [2024-06-07 23:29:47.731511] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.286 [2024-06-07 23:29:47.731524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.286 qpair failed and we were unable to recover it. 00:33:25.286 [2024-06-07 23:29:47.741429] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.286 [2024-06-07 23:29:47.741490] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.286 [2024-06-07 23:29:47.741505] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.286 [2024-06-07 23:29:47.741511] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.286 [2024-06-07 23:29:47.741517] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.286 [2024-06-07 23:29:47.741531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.286 qpair failed and we were unable to recover it. 
00:33:25.286 [2024-06-07 23:29:47.751471] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.286 [2024-06-07 23:29:47.751533] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.286 [2024-06-07 23:29:47.751548] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.286 [2024-06-07 23:29:47.751554] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.286 [2024-06-07 23:29:47.751560] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.286 [2024-06-07 23:29:47.751573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.286 qpair failed and we were unable to recover it. 00:33:25.286 [2024-06-07 23:29:47.761506] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.286 [2024-06-07 23:29:47.761575] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.286 [2024-06-07 23:29:47.761590] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.286 [2024-06-07 23:29:47.761597] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.286 [2024-06-07 23:29:47.761606] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.286 [2024-06-07 23:29:47.761619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.286 qpair failed and we were unable to recover it. 00:33:25.286 [2024-06-07 23:29:47.771406] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.286 [2024-06-07 23:29:47.771479] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.286 [2024-06-07 23:29:47.771493] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.286 [2024-06-07 23:29:47.771500] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.286 [2024-06-07 23:29:47.771506] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.286 [2024-06-07 23:29:47.771519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.286 qpair failed and we were unable to recover it. 
00:33:25.286 [2024-06-07 23:29:47.781590] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.286 [2024-06-07 23:29:47.781659] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.286 [2024-06-07 23:29:47.781674] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.286 [2024-06-07 23:29:47.781681] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.286 [2024-06-07 23:29:47.781687] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.286 [2024-06-07 23:29:47.781700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.286 qpair failed and we were unable to recover it. 00:33:25.286 [2024-06-07 23:29:47.791576] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.286 [2024-06-07 23:29:47.791643] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.286 [2024-06-07 23:29:47.791658] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.286 [2024-06-07 23:29:47.791665] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.286 [2024-06-07 23:29:47.791671] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.286 [2024-06-07 23:29:47.791684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.286 qpair failed and we were unable to recover it. 00:33:25.286 [2024-06-07 23:29:47.801602] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.286 [2024-06-07 23:29:47.801672] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.286 [2024-06-07 23:29:47.801688] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.286 [2024-06-07 23:29:47.801695] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.286 [2024-06-07 23:29:47.801701] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.286 [2024-06-07 23:29:47.801713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.286 qpair failed and we were unable to recover it. 
00:33:25.286 [2024-06-07 23:29:47.811702] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.286 [2024-06-07 23:29:47.811810] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.286 [2024-06-07 23:29:47.811825] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.286 [2024-06-07 23:29:47.811832] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.286 [2024-06-07 23:29:47.811838] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.286 [2024-06-07 23:29:47.811851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.286 qpair failed and we were unable to recover it. 00:33:25.286 [2024-06-07 23:29:47.821653] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.286 [2024-06-07 23:29:47.821715] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.286 [2024-06-07 23:29:47.821729] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.286 [2024-06-07 23:29:47.821736] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.286 [2024-06-07 23:29:47.821742] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.286 [2024-06-07 23:29:47.821755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.286 qpair failed and we were unable to recover it. 00:33:25.286 [2024-06-07 23:29:47.831685] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.286 [2024-06-07 23:29:47.831779] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.286 [2024-06-07 23:29:47.831795] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.286 [2024-06-07 23:29:47.831801] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.286 [2024-06-07 23:29:47.831807] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.286 [2024-06-07 23:29:47.831821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.286 qpair failed and we were unable to recover it. 
00:33:25.286 [2024-06-07 23:29:47.841752] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.286 [2024-06-07 23:29:47.841823] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.286 [2024-06-07 23:29:47.841838] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.286 [2024-06-07 23:29:47.841844] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.286 [2024-06-07 23:29:47.841850] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.287 [2024-06-07 23:29:47.841864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.287 qpair failed and we were unable to recover it. 00:33:25.287 [2024-06-07 23:29:47.851769] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.287 [2024-06-07 23:29:47.851853] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.287 [2024-06-07 23:29:47.851867] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.287 [2024-06-07 23:29:47.851874] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.287 [2024-06-07 23:29:47.851884] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.287 [2024-06-07 23:29:47.851897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.287 qpair failed and we were unable to recover it. 00:33:25.287 [2024-06-07 23:29:47.861644] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.287 [2024-06-07 23:29:47.861706] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.287 [2024-06-07 23:29:47.861721] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.287 [2024-06-07 23:29:47.861727] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.287 [2024-06-07 23:29:47.861733] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.287 [2024-06-07 23:29:47.861746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.287 qpair failed and we were unable to recover it. 
00:33:25.287 [2024-06-07 23:29:47.871787] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.287 [2024-06-07 23:29:47.871854] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.287 [2024-06-07 23:29:47.871868] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.287 [2024-06-07 23:29:47.871875] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.287 [2024-06-07 23:29:47.871881] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.287 [2024-06-07 23:29:47.871894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.287 qpair failed and we were unable to recover it. 00:33:25.287 [2024-06-07 23:29:47.881857] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.287 [2024-06-07 23:29:47.881923] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.287 [2024-06-07 23:29:47.881937] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.287 [2024-06-07 23:29:47.881944] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.287 [2024-06-07 23:29:47.881950] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.287 [2024-06-07 23:29:47.881963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.287 qpair failed and we were unable to recover it. 00:33:25.287 [2024-06-07 23:29:47.891836] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.287 [2024-06-07 23:29:47.891899] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.287 [2024-06-07 23:29:47.891914] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.287 [2024-06-07 23:29:47.891921] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.287 [2024-06-07 23:29:47.891927] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.287 [2024-06-07 23:29:47.891941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.287 qpair failed and we were unable to recover it. 
00:33:25.287 [2024-06-07 23:29:47.901890] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.287 [2024-06-07 23:29:47.901966] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.287 [2024-06-07 23:29:47.901982] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.287 [2024-06-07 23:29:47.901989] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.287 [2024-06-07 23:29:47.901995] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.287 [2024-06-07 23:29:47.902010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.287 qpair failed and we were unable to recover it. 00:33:25.287 [2024-06-07 23:29:47.911804] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.287 [2024-06-07 23:29:47.911872] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.287 [2024-06-07 23:29:47.911887] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.287 [2024-06-07 23:29:47.911894] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.287 [2024-06-07 23:29:47.911900] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.287 [2024-06-07 23:29:47.911914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.287 qpair failed and we were unable to recover it. 00:33:25.287 [2024-06-07 23:29:47.921951] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.287 [2024-06-07 23:29:47.922021] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.287 [2024-06-07 23:29:47.922036] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.287 [2024-06-07 23:29:47.922042] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.287 [2024-06-07 23:29:47.922049] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.287 [2024-06-07 23:29:47.922062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.287 qpair failed and we were unable to recover it. 
00:33:25.287 [2024-06-07 23:29:47.931964] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.287 [2024-06-07 23:29:47.932044] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.287 [2024-06-07 23:29:47.932068] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.287 [2024-06-07 23:29:47.932076] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.287 [2024-06-07 23:29:47.932083] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.287 [2024-06-07 23:29:47.932102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.287 qpair failed and we were unable to recover it. 00:33:25.287 [2024-06-07 23:29:47.942025] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.287 [2024-06-07 23:29:47.942105] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.287 [2024-06-07 23:29:47.942130] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.287 [2024-06-07 23:29:47.942138] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.287 [2024-06-07 23:29:47.942149] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.287 [2024-06-07 23:29:47.942169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.287 qpair failed and we were unable to recover it. 00:33:25.287 [2024-06-07 23:29:47.952110] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.287 [2024-06-07 23:29:47.952178] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.287 [2024-06-07 23:29:47.952195] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.287 [2024-06-07 23:29:47.952202] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.287 [2024-06-07 23:29:47.952208] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.287 [2024-06-07 23:29:47.952223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.287 qpair failed and we were unable to recover it. 
00:33:25.287 [2024-06-07 23:29:47.962156] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.287 [2024-06-07 23:29:47.962235] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.287 [2024-06-07 23:29:47.962255] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.287 [2024-06-07 23:29:47.962262] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.287 [2024-06-07 23:29:47.962268] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.287 [2024-06-07 23:29:47.962282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.287 qpair failed and we were unable to recover it. 00:33:25.549 [2024-06-07 23:29:47.972083] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.549 [2024-06-07 23:29:47.972151] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.549 [2024-06-07 23:29:47.972166] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.549 [2024-06-07 23:29:47.972173] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.549 [2024-06-07 23:29:47.972179] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.549 [2024-06-07 23:29:47.972193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.549 qpair failed and we were unable to recover it. 00:33:25.549 [2024-06-07 23:29:47.982121] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.549 [2024-06-07 23:29:47.982217] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.549 [2024-06-07 23:29:47.982233] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.549 [2024-06-07 23:29:47.982240] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.549 [2024-06-07 23:29:47.982252] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.549 [2024-06-07 23:29:47.982266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.549 qpair failed and we were unable to recover it. 
00:33:25.549 [2024-06-07 23:29:47.992217] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.549 [2024-06-07 23:29:47.992314] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.549 [2024-06-07 23:29:47.992330] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.549 [2024-06-07 23:29:47.992337] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.549 [2024-06-07 23:29:47.992343] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.549 [2024-06-07 23:29:47.992357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.549 qpair failed and we were unable to recover it. 00:33:25.549 [2024-06-07 23:29:48.002197] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.549 [2024-06-07 23:29:48.002287] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.549 [2024-06-07 23:29:48.002303] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.549 [2024-06-07 23:29:48.002309] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.549 [2024-06-07 23:29:48.002315] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.549 [2024-06-07 23:29:48.002329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.549 qpair failed and we were unable to recover it. 00:33:25.549 [2024-06-07 23:29:48.012203] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.549 [2024-06-07 23:29:48.012276] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.549 [2024-06-07 23:29:48.012291] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.549 [2024-06-07 23:29:48.012298] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.549 [2024-06-07 23:29:48.012304] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.549 [2024-06-07 23:29:48.012317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.549 qpair failed and we were unable to recover it. 
00:33:25.549 [2024-06-07 23:29:48.022133] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.549 [2024-06-07 23:29:48.022199] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.549 [2024-06-07 23:29:48.022214] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.549 [2024-06-07 23:29:48.022220] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.549 [2024-06-07 23:29:48.022226] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.549 [2024-06-07 23:29:48.022240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.549 qpair failed and we were unable to recover it. 00:33:25.549 [2024-06-07 23:29:48.032304] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.549 [2024-06-07 23:29:48.032372] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.549 [2024-06-07 23:29:48.032388] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.549 [2024-06-07 23:29:48.032398] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.549 [2024-06-07 23:29:48.032404] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.549 [2024-06-07 23:29:48.032418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.549 qpair failed and we were unable to recover it. 00:33:25.549 [2024-06-07 23:29:48.042192] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.549 [2024-06-07 23:29:48.042262] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.550 [2024-06-07 23:29:48.042280] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.550 [2024-06-07 23:29:48.042286] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.550 [2024-06-07 23:29:48.042292] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.550 [2024-06-07 23:29:48.042307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.550 qpair failed and we were unable to recover it. 
00:33:25.550 [2024-06-07 23:29:48.052210] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.550 [2024-06-07 23:29:48.052277] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.550 [2024-06-07 23:29:48.052293] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.550 [2024-06-07 23:29:48.052299] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.550 [2024-06-07 23:29:48.052305] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.550 [2024-06-07 23:29:48.052319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.550 qpair failed and we were unable to recover it. 00:33:25.550 [2024-06-07 23:29:48.062339] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.550 [2024-06-07 23:29:48.062405] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.550 [2024-06-07 23:29:48.062421] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.550 [2024-06-07 23:29:48.062428] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.550 [2024-06-07 23:29:48.062434] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.550 [2024-06-07 23:29:48.062448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.550 qpair failed and we were unable to recover it. 00:33:25.550 [2024-06-07 23:29:48.072404] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.550 [2024-06-07 23:29:48.072473] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.550 [2024-06-07 23:29:48.072488] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.550 [2024-06-07 23:29:48.072495] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.550 [2024-06-07 23:29:48.072501] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.550 [2024-06-07 23:29:48.072515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.550 qpair failed and we were unable to recover it. 
00:33:25.550 [2024-06-07 23:29:48.082420] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.550 [2024-06-07 23:29:48.082502] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.550 [2024-06-07 23:29:48.082518] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.550 [2024-06-07 23:29:48.082525] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.550 [2024-06-07 23:29:48.082530] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.550 [2024-06-07 23:29:48.082544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.550 qpair failed and we were unable to recover it. 00:33:25.550 [2024-06-07 23:29:48.092354] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.550 [2024-06-07 23:29:48.092424] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.550 [2024-06-07 23:29:48.092440] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.550 [2024-06-07 23:29:48.092446] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.550 [2024-06-07 23:29:48.092452] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.550 [2024-06-07 23:29:48.092466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.550 qpair failed and we were unable to recover it. 00:33:25.550 [2024-06-07 23:29:48.102392] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.550 [2024-06-07 23:29:48.102486] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.550 [2024-06-07 23:29:48.102501] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.550 [2024-06-07 23:29:48.102508] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.550 [2024-06-07 23:29:48.102514] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.550 [2024-06-07 23:29:48.102528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.550 qpair failed and we were unable to recover it. 
00:33:25.550 [2024-06-07 23:29:48.112429] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.550 [2024-06-07 23:29:48.112522] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.550 [2024-06-07 23:29:48.112537] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.550 [2024-06-07 23:29:48.112543] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.550 [2024-06-07 23:29:48.112549] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.550 [2024-06-07 23:29:48.112562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.550 qpair failed and we were unable to recover it. 00:33:25.550 [2024-06-07 23:29:48.122539] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.550 [2024-06-07 23:29:48.122628] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.550 [2024-06-07 23:29:48.122643] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.550 [2024-06-07 23:29:48.122654] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.550 [2024-06-07 23:29:48.122660] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.550 [2024-06-07 23:29:48.122673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.550 qpair failed and we were unable to recover it. 00:33:25.550 [2024-06-07 23:29:48.132583] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.550 [2024-06-07 23:29:48.132681] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.550 [2024-06-07 23:29:48.132695] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.550 [2024-06-07 23:29:48.132702] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.550 [2024-06-07 23:29:48.132708] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.550 [2024-06-07 23:29:48.132721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.550 qpair failed and we were unable to recover it. 
00:33:25.550 [2024-06-07 23:29:48.142605] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.550 [2024-06-07 23:29:48.142671] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.550 [2024-06-07 23:29:48.142686] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.550 [2024-06-07 23:29:48.142692] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.550 [2024-06-07 23:29:48.142698] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.550 [2024-06-07 23:29:48.142711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.550 qpair failed and we were unable to recover it. 00:33:25.550 [2024-06-07 23:29:48.152674] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.550 [2024-06-07 23:29:48.152744] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.550 [2024-06-07 23:29:48.152759] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.550 [2024-06-07 23:29:48.152766] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.550 [2024-06-07 23:29:48.152771] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.550 [2024-06-07 23:29:48.152785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.550 qpair failed and we were unable to recover it. 00:33:25.550 [2024-06-07 23:29:48.162628] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.550 [2024-06-07 23:29:48.162696] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.550 [2024-06-07 23:29:48.162712] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.550 [2024-06-07 23:29:48.162718] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.550 [2024-06-07 23:29:48.162724] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.550 [2024-06-07 23:29:48.162737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.550 qpair failed and we were unable to recover it. 
00:33:25.550 [2024-06-07 23:29:48.172672] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.550 [2024-06-07 23:29:48.172733] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.550 [2024-06-07 23:29:48.172748] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.550 [2024-06-07 23:29:48.172755] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.551 [2024-06-07 23:29:48.172761] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.551 [2024-06-07 23:29:48.172774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.551 qpair failed and we were unable to recover it. 00:33:25.551 [2024-06-07 23:29:48.182697] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.551 [2024-06-07 23:29:48.182770] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.551 [2024-06-07 23:29:48.182785] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.551 [2024-06-07 23:29:48.182792] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.551 [2024-06-07 23:29:48.182798] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.551 [2024-06-07 23:29:48.182811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.551 qpair failed and we were unable to recover it. 00:33:25.551 [2024-06-07 23:29:48.192722] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.551 [2024-06-07 23:29:48.192790] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.551 [2024-06-07 23:29:48.192805] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.551 [2024-06-07 23:29:48.192812] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.551 [2024-06-07 23:29:48.192818] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.551 [2024-06-07 23:29:48.192831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.551 qpair failed and we were unable to recover it. 
00:33:25.551 [2024-06-07 23:29:48.202767] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.551 [2024-06-07 23:29:48.202841] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.551 [2024-06-07 23:29:48.202857] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.551 [2024-06-07 23:29:48.202863] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.551 [2024-06-07 23:29:48.202869] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.551 [2024-06-07 23:29:48.202883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.551 qpair failed and we were unable to recover it. 00:33:25.551 [2024-06-07 23:29:48.212810] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.551 [2024-06-07 23:29:48.212887] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.551 [2024-06-07 23:29:48.212902] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.551 [2024-06-07 23:29:48.212913] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.551 [2024-06-07 23:29:48.212919] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.551 [2024-06-07 23:29:48.212933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.551 qpair failed and we were unable to recover it. 00:33:25.551 [2024-06-07 23:29:48.222839] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.551 [2024-06-07 23:29:48.222900] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.551 [2024-06-07 23:29:48.222916] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.551 [2024-06-07 23:29:48.222922] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.551 [2024-06-07 23:29:48.222928] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.551 [2024-06-07 23:29:48.222941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.551 qpair failed and we were unable to recover it. 
00:33:25.813 [2024-06-07 23:29:48.232864] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.813 [2024-06-07 23:29:48.232930] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.813 [2024-06-07 23:29:48.232945] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.813 [2024-06-07 23:29:48.232951] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.813 [2024-06-07 23:29:48.232958] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.813 [2024-06-07 23:29:48.232971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.813 qpair failed and we were unable to recover it. 00:33:25.813 [2024-06-07 23:29:48.242885] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.813 [2024-06-07 23:29:48.242956] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.813 [2024-06-07 23:29:48.242971] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.813 [2024-06-07 23:29:48.242978] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.813 [2024-06-07 23:29:48.242984] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.813 [2024-06-07 23:29:48.242997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.813 qpair failed and we were unable to recover it. 00:33:25.813 [2024-06-07 23:29:48.252908] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.813 [2024-06-07 23:29:48.252984] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.813 [2024-06-07 23:29:48.253000] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.813 [2024-06-07 23:29:48.253006] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.813 [2024-06-07 23:29:48.253012] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.813 [2024-06-07 23:29:48.253026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.813 qpair failed and we were unable to recover it. 
00:33:25.813 [2024-06-07 23:29:48.262943] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.813 [2024-06-07 23:29:48.263007] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.813 [2024-06-07 23:29:48.263023] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.813 [2024-06-07 23:29:48.263029] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.813 [2024-06-07 23:29:48.263035] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.813 [2024-06-07 23:29:48.263048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.813 qpair failed and we were unable to recover it. 00:33:25.813 [2024-06-07 23:29:48.273040] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.813 [2024-06-07 23:29:48.273150] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.813 [2024-06-07 23:29:48.273165] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.813 [2024-06-07 23:29:48.273172] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.813 [2024-06-07 23:29:48.273178] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.813 [2024-06-07 23:29:48.273191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.813 qpair failed and we were unable to recover it. 00:33:25.813 [2024-06-07 23:29:48.282997] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.813 [2024-06-07 23:29:48.283069] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.813 [2024-06-07 23:29:48.283085] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.813 [2024-06-07 23:29:48.283091] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.813 [2024-06-07 23:29:48.283097] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.813 [2024-06-07 23:29:48.283110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.813 qpair failed and we were unable to recover it. 
00:33:25.813 [2024-06-07 23:29:48.293058] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.813 [2024-06-07 23:29:48.293167] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.813 [2024-06-07 23:29:48.293183] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.813 [2024-06-07 23:29:48.293189] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.813 [2024-06-07 23:29:48.293195] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.813 [2024-06-07 23:29:48.293209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.813 qpair failed and we were unable to recover it. 00:33:25.813 [2024-06-07 23:29:48.302945] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.813 [2024-06-07 23:29:48.303014] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.813 [2024-06-07 23:29:48.303031] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.813 [2024-06-07 23:29:48.303041] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.813 [2024-06-07 23:29:48.303047] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.813 [2024-06-07 23:29:48.303061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.813 qpair failed and we were unable to recover it. 00:33:25.813 [2024-06-07 23:29:48.313160] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.813 [2024-06-07 23:29:48.313272] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.813 [2024-06-07 23:29:48.313288] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.813 [2024-06-07 23:29:48.313295] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.813 [2024-06-07 23:29:48.313301] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.814 [2024-06-07 23:29:48.313315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.814 qpair failed and we were unable to recover it. 
00:33:25.814 [2024-06-07 23:29:48.322991] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.814 [2024-06-07 23:29:48.323064] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.814 [2024-06-07 23:29:48.323080] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.814 [2024-06-07 23:29:48.323087] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.814 [2024-06-07 23:29:48.323093] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.814 [2024-06-07 23:29:48.323108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.814 qpair failed and we were unable to recover it. 00:33:25.814 [2024-06-07 23:29:48.333119] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.814 [2024-06-07 23:29:48.333179] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.814 [2024-06-07 23:29:48.333194] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.814 [2024-06-07 23:29:48.333201] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.814 [2024-06-07 23:29:48.333206] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.814 [2024-06-07 23:29:48.333220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.814 qpair failed and we were unable to recover it. 00:33:25.814 [2024-06-07 23:29:48.343070] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.814 [2024-06-07 23:29:48.343135] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.814 [2024-06-07 23:29:48.343153] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.814 [2024-06-07 23:29:48.343159] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.814 [2024-06-07 23:29:48.343165] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.814 [2024-06-07 23:29:48.343179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.814 qpair failed and we were unable to recover it. 
00:33:25.814 [2024-06-07 23:29:48.353153] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.814 [2024-06-07 23:29:48.353223] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.814 [2024-06-07 23:29:48.353238] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.814 [2024-06-07 23:29:48.353250] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.814 [2024-06-07 23:29:48.353256] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.814 [2024-06-07 23:29:48.353270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.814 qpair failed and we were unable to recover it. 00:33:25.814 [2024-06-07 23:29:48.363255] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.814 [2024-06-07 23:29:48.363324] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.814 [2024-06-07 23:29:48.363340] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.814 [2024-06-07 23:29:48.363346] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.814 [2024-06-07 23:29:48.363352] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.814 [2024-06-07 23:29:48.363365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.814 qpair failed and we were unable to recover it. 00:33:25.814 [2024-06-07 23:29:48.373233] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.814 [2024-06-07 23:29:48.373343] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.814 [2024-06-07 23:29:48.373358] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.814 [2024-06-07 23:29:48.373364] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.814 [2024-06-07 23:29:48.373370] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.814 [2024-06-07 23:29:48.373384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.814 qpair failed and we were unable to recover it. 
00:33:25.814 [2024-06-07 23:29:48.383287] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.814 [2024-06-07 23:29:48.383346] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.814 [2024-06-07 23:29:48.383361] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.814 [2024-06-07 23:29:48.383367] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.814 [2024-06-07 23:29:48.383373] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.814 [2024-06-07 23:29:48.383387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.814 qpair failed and we were unable to recover it. 00:33:25.814 [2024-06-07 23:29:48.393208] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.814 [2024-06-07 23:29:48.393279] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.814 [2024-06-07 23:29:48.393298] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.814 [2024-06-07 23:29:48.393305] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.814 [2024-06-07 23:29:48.393311] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.814 [2024-06-07 23:29:48.393324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.814 qpair failed and we were unable to recover it. 00:33:25.814 [2024-06-07 23:29:48.403356] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.814 [2024-06-07 23:29:48.403425] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.814 [2024-06-07 23:29:48.403440] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.814 [2024-06-07 23:29:48.403447] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.814 [2024-06-07 23:29:48.403453] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.814 [2024-06-07 23:29:48.403466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.814 qpair failed and we were unable to recover it. 
00:33:25.814 [2024-06-07 23:29:48.413332] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.814 [2024-06-07 23:29:48.413408] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.814 [2024-06-07 23:29:48.413423] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.814 [2024-06-07 23:29:48.413430] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.814 [2024-06-07 23:29:48.413436] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.814 [2024-06-07 23:29:48.413449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.814 qpair failed and we were unable to recover it. 00:33:25.814 [2024-06-07 23:29:48.423396] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.814 [2024-06-07 23:29:48.423462] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.814 [2024-06-07 23:29:48.423477] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.814 [2024-06-07 23:29:48.423484] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.814 [2024-06-07 23:29:48.423490] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.814 [2024-06-07 23:29:48.423503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.814 qpair failed and we were unable to recover it. 00:33:25.814 [2024-06-07 23:29:48.433435] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.814 [2024-06-07 23:29:48.433503] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.814 [2024-06-07 23:29:48.433518] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.814 [2024-06-07 23:29:48.433524] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.814 [2024-06-07 23:29:48.433530] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.814 [2024-06-07 23:29:48.433543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.814 qpair failed and we were unable to recover it. 
00:33:25.814 [2024-06-07 23:29:48.443530] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.814 [2024-06-07 23:29:48.443596] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.814 [2024-06-07 23:29:48.443610] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.814 [2024-06-07 23:29:48.443617] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.814 [2024-06-07 23:29:48.443623] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.814 [2024-06-07 23:29:48.443636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.814 qpair failed and we were unable to recover it. 00:33:25.814 [2024-06-07 23:29:48.453452] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.815 [2024-06-07 23:29:48.453516] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.815 [2024-06-07 23:29:48.453531] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.815 [2024-06-07 23:29:48.453537] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.815 [2024-06-07 23:29:48.453543] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.815 [2024-06-07 23:29:48.453556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.815 qpair failed and we were unable to recover it. 00:33:25.815 [2024-06-07 23:29:48.463512] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.815 [2024-06-07 23:29:48.463582] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.815 [2024-06-07 23:29:48.463599] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.815 [2024-06-07 23:29:48.463605] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.815 [2024-06-07 23:29:48.463611] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.815 [2024-06-07 23:29:48.463625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.815 qpair failed and we were unable to recover it. 
00:33:25.815 [2024-06-07 23:29:48.473542] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.815 [2024-06-07 23:29:48.473609] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.815 [2024-06-07 23:29:48.473624] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.815 [2024-06-07 23:29:48.473631] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.815 [2024-06-07 23:29:48.473637] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.815 [2024-06-07 23:29:48.473650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.815 qpair failed and we were unable to recover it. 00:33:25.815 [2024-06-07 23:29:48.483586] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.815 [2024-06-07 23:29:48.483694] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.815 [2024-06-07 23:29:48.483712] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.815 [2024-06-07 23:29:48.483719] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.815 [2024-06-07 23:29:48.483725] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:25.815 [2024-06-07 23:29:48.483739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.815 qpair failed and we were unable to recover it. 00:33:26.076 [2024-06-07 23:29:48.493554] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.076 [2024-06-07 23:29:48.493627] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.076 [2024-06-07 23:29:48.493642] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.076 [2024-06-07 23:29:48.493648] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.076 [2024-06-07 23:29:48.493654] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.076 [2024-06-07 23:29:48.493667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.076 qpair failed and we were unable to recover it. 
00:33:26.076 [2024-06-07 23:29:48.503626] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.076 [2024-06-07 23:29:48.503690] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.076 [2024-06-07 23:29:48.503705] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.076 [2024-06-07 23:29:48.503711] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.076 [2024-06-07 23:29:48.503717] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.076 [2024-06-07 23:29:48.503730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.076 qpair failed and we were unable to recover it. 00:33:26.076 [2024-06-07 23:29:48.513657] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.076 [2024-06-07 23:29:48.513721] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.076 [2024-06-07 23:29:48.513736] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.076 [2024-06-07 23:29:48.513742] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.076 [2024-06-07 23:29:48.513748] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.076 [2024-06-07 23:29:48.513761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.076 qpair failed and we were unable to recover it. 00:33:26.076 [2024-06-07 23:29:48.523690] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.076 [2024-06-07 23:29:48.523759] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.076 [2024-06-07 23:29:48.523774] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.076 [2024-06-07 23:29:48.523780] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.076 [2024-06-07 23:29:48.523786] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.076 [2024-06-07 23:29:48.523803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.076 qpair failed and we were unable to recover it. 
00:33:26.076 [2024-06-07 23:29:48.533650] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.076 [2024-06-07 23:29:48.533708] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.076 [2024-06-07 23:29:48.533723] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.076 [2024-06-07 23:29:48.533729] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.076 [2024-06-07 23:29:48.533735] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.076 [2024-06-07 23:29:48.533748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.076 qpair failed and we were unable to recover it. 00:33:26.076 [2024-06-07 23:29:48.543731] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.076 [2024-06-07 23:29:48.543797] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.077 [2024-06-07 23:29:48.543812] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.077 [2024-06-07 23:29:48.543818] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.077 [2024-06-07 23:29:48.543824] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.077 [2024-06-07 23:29:48.543838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.077 qpair failed and we were unable to recover it. 00:33:26.077 [2024-06-07 23:29:48.553770] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.077 [2024-06-07 23:29:48.553838] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.077 [2024-06-07 23:29:48.553853] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.077 [2024-06-07 23:29:48.553859] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.077 [2024-06-07 23:29:48.553865] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.077 [2024-06-07 23:29:48.553878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.077 qpair failed and we were unable to recover it. 
00:33:26.077 [2024-06-07 23:29:48.563767] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.077 [2024-06-07 23:29:48.563840] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.077 [2024-06-07 23:29:48.563855] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.077 [2024-06-07 23:29:48.563861] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.077 [2024-06-07 23:29:48.563867] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.077 [2024-06-07 23:29:48.563880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.077 qpair failed and we were unable to recover it. 00:33:26.077 [2024-06-07 23:29:48.573642] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.077 [2024-06-07 23:29:48.573717] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.077 [2024-06-07 23:29:48.573735] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.077 [2024-06-07 23:29:48.573742] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.077 [2024-06-07 23:29:48.573748] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.077 [2024-06-07 23:29:48.573761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.077 qpair failed and we were unable to recover it. 00:33:26.077 [2024-06-07 23:29:48.583836] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.077 [2024-06-07 23:29:48.583900] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.077 [2024-06-07 23:29:48.583914] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.077 [2024-06-07 23:29:48.583921] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.077 [2024-06-07 23:29:48.583927] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.077 [2024-06-07 23:29:48.583941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.077 qpair failed and we were unable to recover it. 
00:33:26.077 [2024-06-07 23:29:48.593873] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.077 [2024-06-07 23:29:48.593939] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.077 [2024-06-07 23:29:48.593954] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.077 [2024-06-07 23:29:48.593961] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.077 [2024-06-07 23:29:48.593967] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.077 [2024-06-07 23:29:48.593980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.077 qpair failed and we were unable to recover it. 00:33:26.077 [2024-06-07 23:29:48.603880] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.077 [2024-06-07 23:29:48.603951] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.077 [2024-06-07 23:29:48.603966] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.077 [2024-06-07 23:29:48.603972] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.077 [2024-06-07 23:29:48.603978] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.077 [2024-06-07 23:29:48.603992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.077 qpair failed and we were unable to recover it. 00:33:26.077 [2024-06-07 23:29:48.613868] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.077 [2024-06-07 23:29:48.613933] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.077 [2024-06-07 23:29:48.613948] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.077 [2024-06-07 23:29:48.613954] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.077 [2024-06-07 23:29:48.613960] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.077 [2024-06-07 23:29:48.613977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.077 qpair failed and we were unable to recover it. 
00:33:26.077 [2024-06-07 23:29:48.623944] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.077 [2024-06-07 23:29:48.624010] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.077 [2024-06-07 23:29:48.624024] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.077 [2024-06-07 23:29:48.624031] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.077 [2024-06-07 23:29:48.624037] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.077 [2024-06-07 23:29:48.624051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.077 qpair failed and we were unable to recover it. 00:33:26.077 [2024-06-07 23:29:48.633851] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.077 [2024-06-07 23:29:48.633918] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.077 [2024-06-07 23:29:48.633933] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.077 [2024-06-07 23:29:48.633939] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.077 [2024-06-07 23:29:48.633945] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.077 [2024-06-07 23:29:48.633958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.077 qpair failed and we were unable to recover it. 00:33:26.077 [2024-06-07 23:29:48.643996] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.077 [2024-06-07 23:29:48.644067] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.077 [2024-06-07 23:29:48.644082] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.077 [2024-06-07 23:29:48.644089] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.077 [2024-06-07 23:29:48.644095] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.077 [2024-06-07 23:29:48.644108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.077 qpair failed and we were unable to recover it. 
00:33:26.077 [2024-06-07 23:29:48.653972] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.077 [2024-06-07 23:29:48.654038] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.077 [2024-06-07 23:29:48.654053] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.077 [2024-06-07 23:29:48.654059] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.077 [2024-06-07 23:29:48.654065] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.077 [2024-06-07 23:29:48.654078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.077 qpair failed and we were unable to recover it. 00:33:26.077 [2024-06-07 23:29:48.664048] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.077 [2024-06-07 23:29:48.664111] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.077 [2024-06-07 23:29:48.664131] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.077 [2024-06-07 23:29:48.664137] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.077 [2024-06-07 23:29:48.664143] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.077 [2024-06-07 23:29:48.664156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.077 qpair failed and we were unable to recover it. 00:33:26.077 [2024-06-07 23:29:48.674090] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.077 [2024-06-07 23:29:48.674170] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.077 [2024-06-07 23:29:48.674185] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.077 [2024-06-07 23:29:48.674192] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.078 [2024-06-07 23:29:48.674197] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.078 [2024-06-07 23:29:48.674210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.078 qpair failed and we were unable to recover it. 
00:33:26.078 [2024-06-07 23:29:48.684116] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.078 [2024-06-07 23:29:48.684185] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.078 [2024-06-07 23:29:48.684200] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.078 [2024-06-07 23:29:48.684207] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.078 [2024-06-07 23:29:48.684213] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.078 [2024-06-07 23:29:48.684226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.078 qpair failed and we were unable to recover it. 00:33:26.078 [2024-06-07 23:29:48.694072] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.078 [2024-06-07 23:29:48.694134] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.078 [2024-06-07 23:29:48.694149] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.078 [2024-06-07 23:29:48.694155] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.078 [2024-06-07 23:29:48.694161] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.078 [2024-06-07 23:29:48.694174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.078 qpair failed and we were unable to recover it. 00:33:26.078 [2024-06-07 23:29:48.704104] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.078 [2024-06-07 23:29:48.704171] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.078 [2024-06-07 23:29:48.704186] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.078 [2024-06-07 23:29:48.704192] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.078 [2024-06-07 23:29:48.704198] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.078 [2024-06-07 23:29:48.704215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.078 qpair failed and we were unable to recover it. 
00:33:26.078 [2024-06-07 23:29:48.714246] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.078 [2024-06-07 23:29:48.714314] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.078 [2024-06-07 23:29:48.714329] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.078 [2024-06-07 23:29:48.714336] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.078 [2024-06-07 23:29:48.714342] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.078 [2024-06-07 23:29:48.714355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.078 qpair failed and we were unable to recover it. 00:33:26.078 [2024-06-07 23:29:48.724215] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.078 [2024-06-07 23:29:48.724290] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.078 [2024-06-07 23:29:48.724306] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.078 [2024-06-07 23:29:48.724312] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.078 [2024-06-07 23:29:48.724318] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.078 [2024-06-07 23:29:48.724332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.078 qpair failed and we were unable to recover it. 00:33:26.078 [2024-06-07 23:29:48.734071] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.078 [2024-06-07 23:29:48.734135] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.078 [2024-06-07 23:29:48.734149] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.078 [2024-06-07 23:29:48.734156] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.078 [2024-06-07 23:29:48.734162] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.078 [2024-06-07 23:29:48.734175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.078 qpair failed and we were unable to recover it. 
00:33:26.078 [2024-06-07 23:29:48.744281] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.078 [2024-06-07 23:29:48.744343] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.078 [2024-06-07 23:29:48.744358] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.078 [2024-06-07 23:29:48.744365] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.078 [2024-06-07 23:29:48.744371] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.078 [2024-06-07 23:29:48.744384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.078 qpair failed and we were unable to recover it. 00:33:26.078 [2024-06-07 23:29:48.754264] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.078 [2024-06-07 23:29:48.754327] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.078 [2024-06-07 23:29:48.754345] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.078 [2024-06-07 23:29:48.754352] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.078 [2024-06-07 23:29:48.754358] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.078 [2024-06-07 23:29:48.754371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.078 qpair failed and we were unable to recover it. 00:33:26.341 [2024-06-07 23:29:48.764320] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.341 [2024-06-07 23:29:48.764384] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.341 [2024-06-07 23:29:48.764399] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.341 [2024-06-07 23:29:48.764406] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.341 [2024-06-07 23:29:48.764412] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.341 [2024-06-07 23:29:48.764425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.341 qpair failed and we were unable to recover it. 
00:33:26.341 [2024-06-07 23:29:48.774294] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.341 [2024-06-07 23:29:48.774355] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.341 [2024-06-07 23:29:48.774370] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.341 [2024-06-07 23:29:48.774376] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.341 [2024-06-07 23:29:48.774382] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.341 [2024-06-07 23:29:48.774396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.341 qpair failed and we were unable to recover it. 00:33:26.341 [2024-06-07 23:29:48.784391] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.341 [2024-06-07 23:29:48.784469] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.341 [2024-06-07 23:29:48.784483] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.341 [2024-06-07 23:29:48.784490] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.341 [2024-06-07 23:29:48.784496] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.341 [2024-06-07 23:29:48.784509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.341 qpair failed and we were unable to recover it. 00:33:26.341 [2024-06-07 23:29:48.794406] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.341 [2024-06-07 23:29:48.794472] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.341 [2024-06-07 23:29:48.794487] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.341 [2024-06-07 23:29:48.794494] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.341 [2024-06-07 23:29:48.794499] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.341 [2024-06-07 23:29:48.794516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.341 qpair failed and we were unable to recover it. 
00:33:26.341 [2024-06-07 23:29:48.804438] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.341 [2024-06-07 23:29:48.804506] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.341 [2024-06-07 23:29:48.804520] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.341 [2024-06-07 23:29:48.804527] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.341 [2024-06-07 23:29:48.804533] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.341 [2024-06-07 23:29:48.804546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.341 qpair failed and we were unable to recover it. 00:33:26.341 [2024-06-07 23:29:48.814339] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.341 [2024-06-07 23:29:48.814401] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.341 [2024-06-07 23:29:48.814416] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.341 [2024-06-07 23:29:48.814422] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.341 [2024-06-07 23:29:48.814428] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.341 [2024-06-07 23:29:48.814441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.341 qpair failed and we were unable to recover it. 00:33:26.341 [2024-06-07 23:29:48.824407] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.341 [2024-06-07 23:29:48.824473] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.341 [2024-06-07 23:29:48.824489] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.341 [2024-06-07 23:29:48.824495] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.341 [2024-06-07 23:29:48.824501] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.341 [2024-06-07 23:29:48.824515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.341 qpair failed and we were unable to recover it. 
00:33:26.341 [2024-06-07 23:29:48.834401] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.341 [2024-06-07 23:29:48.834525] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.341 [2024-06-07 23:29:48.834541] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.341 [2024-06-07 23:29:48.834547] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.341 [2024-06-07 23:29:48.834554] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.341 [2024-06-07 23:29:48.834567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.341 qpair failed and we were unable to recover it. 00:33:26.341 [2024-06-07 23:29:48.844573] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.341 [2024-06-07 23:29:48.844642] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.341 [2024-06-07 23:29:48.844660] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.341 [2024-06-07 23:29:48.844667] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.341 [2024-06-07 23:29:48.844673] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.341 [2024-06-07 23:29:48.844686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.341 qpair failed and we were unable to recover it. 00:33:26.341 [2024-06-07 23:29:48.854536] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.341 [2024-06-07 23:29:48.854596] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.341 [2024-06-07 23:29:48.854611] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.341 [2024-06-07 23:29:48.854617] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.341 [2024-06-07 23:29:48.854623] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.341 [2024-06-07 23:29:48.854636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.341 qpair failed and we were unable to recover it. 
00:33:26.341 [2024-06-07 23:29:48.864583] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.341 [2024-06-07 23:29:48.864647] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.341 [2024-06-07 23:29:48.864662] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.341 [2024-06-07 23:29:48.864669] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.341 [2024-06-07 23:29:48.864674] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.341 [2024-06-07 23:29:48.864688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.341 qpair failed and we were unable to recover it. 00:33:26.341 [2024-06-07 23:29:48.874633] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.341 [2024-06-07 23:29:48.874723] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.341 [2024-06-07 23:29:48.874739] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.341 [2024-06-07 23:29:48.874746] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.341 [2024-06-07 23:29:48.874752] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.341 [2024-06-07 23:29:48.874765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.341 qpair failed and we were unable to recover it. 00:33:26.341 [2024-06-07 23:29:48.884663] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.341 [2024-06-07 23:29:48.884736] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.342 [2024-06-07 23:29:48.884751] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.342 [2024-06-07 23:29:48.884758] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.342 [2024-06-07 23:29:48.884767] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.342 [2024-06-07 23:29:48.884781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.342 qpair failed and we were unable to recover it. 
00:33:26.342 [2024-06-07 23:29:48.894647] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.342 [2024-06-07 23:29:48.894704] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.342 [2024-06-07 23:29:48.894719] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.342 [2024-06-07 23:29:48.894726] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.342 [2024-06-07 23:29:48.894732] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.342 [2024-06-07 23:29:48.894745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.342 qpair failed and we were unable to recover it. 00:33:26.342 [2024-06-07 23:29:48.904706] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.342 [2024-06-07 23:29:48.904771] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.342 [2024-06-07 23:29:48.904786] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.342 [2024-06-07 23:29:48.904792] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.342 [2024-06-07 23:29:48.904798] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.342 [2024-06-07 23:29:48.904811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.342 qpair failed and we were unable to recover it. 00:33:26.342 [2024-06-07 23:29:48.914692] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.342 [2024-06-07 23:29:48.914798] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.342 [2024-06-07 23:29:48.914813] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.342 [2024-06-07 23:29:48.914820] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.342 [2024-06-07 23:29:48.914826] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.342 [2024-06-07 23:29:48.914839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.342 qpair failed and we were unable to recover it. 
00:33:26.342 [2024-06-07 23:29:48.924770] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.342 [2024-06-07 23:29:48.924877] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.342 [2024-06-07 23:29:48.924892] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.342 [2024-06-07 23:29:48.924898] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.342 [2024-06-07 23:29:48.924904] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.342 [2024-06-07 23:29:48.924917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.342 qpair failed and we were unable to recover it. 00:33:26.342 [2024-06-07 23:29:48.934642] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.342 [2024-06-07 23:29:48.934712] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.342 [2024-06-07 23:29:48.934728] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.342 [2024-06-07 23:29:48.934734] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.342 [2024-06-07 23:29:48.934740] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.342 [2024-06-07 23:29:48.934753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.342 qpair failed and we were unable to recover it. 00:33:26.342 [2024-06-07 23:29:48.944833] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.342 [2024-06-07 23:29:48.944902] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.342 [2024-06-07 23:29:48.944917] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.342 [2024-06-07 23:29:48.944924] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.342 [2024-06-07 23:29:48.944930] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.342 [2024-06-07 23:29:48.944943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.342 qpair failed and we were unable to recover it. 
00:33:26.342 [2024-06-07 23:29:48.954858] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.342 [2024-06-07 23:29:48.954933] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.342 [2024-06-07 23:29:48.954957] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.342 [2024-06-07 23:29:48.954965] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.342 [2024-06-07 23:29:48.954972] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.342 [2024-06-07 23:29:48.954990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.342 qpair failed and we were unable to recover it. 00:33:26.342 [2024-06-07 23:29:48.964827] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.342 [2024-06-07 23:29:48.964903] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.342 [2024-06-07 23:29:48.964927] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.342 [2024-06-07 23:29:48.964935] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.342 [2024-06-07 23:29:48.964941] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.342 [2024-06-07 23:29:48.964960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.342 qpair failed and we were unable to recover it. 00:33:26.342 [2024-06-07 23:29:48.974900] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.342 [2024-06-07 23:29:48.974993] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.342 [2024-06-07 23:29:48.975017] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.342 [2024-06-07 23:29:48.975025] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.342 [2024-06-07 23:29:48.975036] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.342 [2024-06-07 23:29:48.975055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.342 qpair failed and we were unable to recover it. 
00:33:26.342 [2024-06-07 23:29:48.984942] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.342 [2024-06-07 23:29:48.985009] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.342 [2024-06-07 23:29:48.985025] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.342 [2024-06-07 23:29:48.985031] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.342 [2024-06-07 23:29:48.985038] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.342 [2024-06-07 23:29:48.985052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.342 qpair failed and we were unable to recover it. 00:33:26.342 [2024-06-07 23:29:48.994863] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.342 [2024-06-07 23:29:48.994936] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.342 [2024-06-07 23:29:48.994952] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.342 [2024-06-07 23:29:48.994959] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.342 [2024-06-07 23:29:48.994965] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.342 [2024-06-07 23:29:48.994979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.342 qpair failed and we were unable to recover it. 00:33:26.342 [2024-06-07 23:29:49.004997] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.342 [2024-06-07 23:29:49.005067] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.342 [2024-06-07 23:29:49.005082] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.342 [2024-06-07 23:29:49.005089] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.342 [2024-06-07 23:29:49.005095] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.342 [2024-06-07 23:29:49.005108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.342 qpair failed and we were unable to recover it. 
00:33:26.342 [2024-06-07 23:29:49.014986] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.342 [2024-06-07 23:29:49.015049] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.342 [2024-06-07 23:29:49.015066] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.342 [2024-06-07 23:29:49.015076] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.342 [2024-06-07 23:29:49.015082] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.343 [2024-06-07 23:29:49.015096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.343 qpair failed and we were unable to recover it. 00:33:26.606 [2024-06-07 23:29:49.025077] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.606 [2024-06-07 23:29:49.025142] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.606 [2024-06-07 23:29:49.025158] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.606 [2024-06-07 23:29:49.025164] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.606 [2024-06-07 23:29:49.025170] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.606 [2024-06-07 23:29:49.025184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.606 qpair failed and we were unable to recover it. 00:33:26.606 [2024-06-07 23:29:49.035061] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.606 [2024-06-07 23:29:49.035121] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.606 [2024-06-07 23:29:49.035136] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.606 [2024-06-07 23:29:49.035143] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.606 [2024-06-07 23:29:49.035149] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.606 [2024-06-07 23:29:49.035162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.606 qpair failed and we were unable to recover it. 
00:33:26.606 [2024-06-07 23:29:49.045183] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.606 [2024-06-07 23:29:49.045290] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.606 [2024-06-07 23:29:49.045305] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.606 [2024-06-07 23:29:49.045312] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.606 [2024-06-07 23:29:49.045318] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.606 [2024-06-07 23:29:49.045332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.606 qpair failed and we were unable to recover it. 00:33:26.606 [2024-06-07 23:29:49.055107] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.606 [2024-06-07 23:29:49.055268] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.606 [2024-06-07 23:29:49.055285] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.606 [2024-06-07 23:29:49.055292] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.606 [2024-06-07 23:29:49.055298] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.606 [2024-06-07 23:29:49.055313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.606 qpair failed and we were unable to recover it. 00:33:26.606 [2024-06-07 23:29:49.065159] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.606 [2024-06-07 23:29:49.065221] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.606 [2024-06-07 23:29:49.065236] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.606 [2024-06-07 23:29:49.065247] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.606 [2024-06-07 23:29:49.065258] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.606 [2024-06-07 23:29:49.065271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.606 qpair failed and we were unable to recover it. 
00:33:26.606 [2024-06-07 23:29:49.075158] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.606 [2024-06-07 23:29:49.075215] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.606 [2024-06-07 23:29:49.075230] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.606 [2024-06-07 23:29:49.075237] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.606 [2024-06-07 23:29:49.075247] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.606 [2024-06-07 23:29:49.075262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.606 qpair failed and we were unable to recover it. 00:33:26.606 [2024-06-07 23:29:49.085212] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.606 [2024-06-07 23:29:49.085334] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.606 [2024-06-07 23:29:49.085351] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.606 [2024-06-07 23:29:49.085358] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.606 [2024-06-07 23:29:49.085364] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.606 [2024-06-07 23:29:49.085378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.606 qpair failed and we were unable to recover it. 00:33:26.606 [2024-06-07 23:29:49.095201] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.606 [2024-06-07 23:29:49.095264] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.606 [2024-06-07 23:29:49.095280] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.606 [2024-06-07 23:29:49.095286] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.606 [2024-06-07 23:29:49.095292] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.606 [2024-06-07 23:29:49.095306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.606 qpair failed and we were unable to recover it. 
00:33:26.606 [2024-06-07 23:29:49.105287] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.606 [2024-06-07 23:29:49.105355] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.606 [2024-06-07 23:29:49.105370] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.606 [2024-06-07 23:29:49.105376] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.606 [2024-06-07 23:29:49.105382] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.606 [2024-06-07 23:29:49.105395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.606 qpair failed and we were unable to recover it. 00:33:26.606 [2024-06-07 23:29:49.115139] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.607 [2024-06-07 23:29:49.115207] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.607 [2024-06-07 23:29:49.115223] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.607 [2024-06-07 23:29:49.115229] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.607 [2024-06-07 23:29:49.115235] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.607 [2024-06-07 23:29:49.115252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.607 qpair failed and we were unable to recover it. 00:33:26.607 [2024-06-07 23:29:49.125317] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.607 [2024-06-07 23:29:49.125384] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.607 [2024-06-07 23:29:49.125400] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.607 [2024-06-07 23:29:49.125406] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.607 [2024-06-07 23:29:49.125412] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.607 [2024-06-07 23:29:49.125425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.607 qpair failed and we were unable to recover it. 
00:33:26.607 [2024-06-07 23:29:49.135304] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.607 [2024-06-07 23:29:49.135371] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.607 [2024-06-07 23:29:49.135387] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.607 [2024-06-07 23:29:49.135393] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.607 [2024-06-07 23:29:49.135399] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.607 [2024-06-07 23:29:49.135412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.607 qpair failed and we were unable to recover it. 00:33:26.607 [2024-06-07 23:29:49.145338] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.607 [2024-06-07 23:29:49.145397] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.607 [2024-06-07 23:29:49.145411] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.607 [2024-06-07 23:29:49.145418] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.607 [2024-06-07 23:29:49.145424] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.607 [2024-06-07 23:29:49.145437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.607 qpair failed and we were unable to recover it. 00:33:26.607 [2024-06-07 23:29:49.155382] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.607 [2024-06-07 23:29:49.155440] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.607 [2024-06-07 23:29:49.155455] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.607 [2024-06-07 23:29:49.155462] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.607 [2024-06-07 23:29:49.155471] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.607 [2024-06-07 23:29:49.155485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.607 qpair failed and we were unable to recover it. 
00:33:26.607 [2024-06-07 23:29:49.165325] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.607 [2024-06-07 23:29:49.165389] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.607 [2024-06-07 23:29:49.165404] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.607 [2024-06-07 23:29:49.165410] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.607 [2024-06-07 23:29:49.165416] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.607 [2024-06-07 23:29:49.165429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.607 qpair failed and we were unable to recover it. 00:33:26.607 [2024-06-07 23:29:49.175307] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.607 [2024-06-07 23:29:49.175363] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.607 [2024-06-07 23:29:49.175378] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.607 [2024-06-07 23:29:49.175385] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.607 [2024-06-07 23:29:49.175391] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.607 [2024-06-07 23:29:49.175404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.607 qpair failed and we were unable to recover it. 00:33:26.607 [2024-06-07 23:29:49.185498] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.607 [2024-06-07 23:29:49.185558] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.607 [2024-06-07 23:29:49.185573] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.607 [2024-06-07 23:29:49.185579] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.607 [2024-06-07 23:29:49.185585] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.607 [2024-06-07 23:29:49.185598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.607 qpair failed and we were unable to recover it. 
00:33:26.607 [2024-06-07 23:29:49.195483] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.607 [2024-06-07 23:29:49.195547] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.607 [2024-06-07 23:29:49.195562] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.607 [2024-06-07 23:29:49.195568] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.607 [2024-06-07 23:29:49.195574] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.607 [2024-06-07 23:29:49.195587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.607 qpair failed and we were unable to recover it. 00:33:26.607 [2024-06-07 23:29:49.205533] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.607 [2024-06-07 23:29:49.205603] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.607 [2024-06-07 23:29:49.205618] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.607 [2024-06-07 23:29:49.205624] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.607 [2024-06-07 23:29:49.205630] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.607 [2024-06-07 23:29:49.205643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.607 qpair failed and we were unable to recover it. 00:33:26.607 [2024-06-07 23:29:49.215531] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.607 [2024-06-07 23:29:49.215589] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.607 [2024-06-07 23:29:49.215604] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.607 [2024-06-07 23:29:49.215611] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.607 [2024-06-07 23:29:49.215617] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.607 [2024-06-07 23:29:49.215630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.607 qpair failed and we were unable to recover it. 
00:33:26.607 [2024-06-07 23:29:49.225620] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.607 [2024-06-07 23:29:49.225686] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.607 [2024-06-07 23:29:49.225701] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.607 [2024-06-07 23:29:49.225707] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.607 [2024-06-07 23:29:49.225713] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.607 [2024-06-07 23:29:49.225726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.607 qpair failed and we were unable to recover it. 00:33:26.607 [2024-06-07 23:29:49.235591] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.607 [2024-06-07 23:29:49.235648] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.607 [2024-06-07 23:29:49.235663] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.607 [2024-06-07 23:29:49.235670] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.607 [2024-06-07 23:29:49.235676] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.607 [2024-06-07 23:29:49.235689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.608 qpair failed and we were unable to recover it. 00:33:26.608 [2024-06-07 23:29:49.245620] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.608 [2024-06-07 23:29:49.245685] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.608 [2024-06-07 23:29:49.245700] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.608 [2024-06-07 23:29:49.245713] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.608 [2024-06-07 23:29:49.245719] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.608 [2024-06-07 23:29:49.245732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.608 qpair failed and we were unable to recover it. 
00:33:26.608 [2024-06-07 23:29:49.255615] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.608 [2024-06-07 23:29:49.255670] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.608 [2024-06-07 23:29:49.255685] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.608 [2024-06-07 23:29:49.255691] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.608 [2024-06-07 23:29:49.255697] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.608 [2024-06-07 23:29:49.255710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.608 qpair failed and we were unable to recover it. 00:33:26.608 [2024-06-07 23:29:49.265716] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.608 [2024-06-07 23:29:49.265781] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.608 [2024-06-07 23:29:49.265796] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.608 [2024-06-07 23:29:49.265802] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.608 [2024-06-07 23:29:49.265808] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.608 [2024-06-07 23:29:49.265821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.608 qpair failed and we were unable to recover it. 00:33:26.608 [2024-06-07 23:29:49.275708] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.608 [2024-06-07 23:29:49.275767] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.608 [2024-06-07 23:29:49.275782] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.608 [2024-06-07 23:29:49.275789] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.608 [2024-06-07 23:29:49.275795] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.608 [2024-06-07 23:29:49.275808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.608 qpair failed and we were unable to recover it. 
00:33:26.871 [2024-06-07 23:29:49.285754] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.871 [2024-06-07 23:29:49.285820] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.871 [2024-06-07 23:29:49.285835] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.871 [2024-06-07 23:29:49.285842] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.871 [2024-06-07 23:29:49.285848] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.871 [2024-06-07 23:29:49.285861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.871 qpair failed and we were unable to recover it. 00:33:26.871 [2024-06-07 23:29:49.295764] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.871 [2024-06-07 23:29:49.295824] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.871 [2024-06-07 23:29:49.295840] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.871 [2024-06-07 23:29:49.295847] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.871 [2024-06-07 23:29:49.295853] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.871 [2024-06-07 23:29:49.295866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.871 qpair failed and we were unable to recover it. 00:33:26.871 [2024-06-07 23:29:49.305868] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.871 [2024-06-07 23:29:49.305937] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.871 [2024-06-07 23:29:49.305953] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.871 [2024-06-07 23:29:49.305960] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.871 [2024-06-07 23:29:49.305966] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.871 [2024-06-07 23:29:49.305980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.872 qpair failed and we were unable to recover it. 
00:33:26.872 [2024-06-07 23:29:49.315825] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.872 [2024-06-07 23:29:49.315883] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.872 [2024-06-07 23:29:49.315898] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.872 [2024-06-07 23:29:49.315905] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.872 [2024-06-07 23:29:49.315910] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.872 [2024-06-07 23:29:49.315923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.872 qpair failed and we were unable to recover it. 00:33:26.872 [2024-06-07 23:29:49.325838] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.872 [2024-06-07 23:29:49.325900] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.872 [2024-06-07 23:29:49.325915] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.872 [2024-06-07 23:29:49.325922] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.872 [2024-06-07 23:29:49.325928] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.872 [2024-06-07 23:29:49.325941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.872 qpair failed and we were unable to recover it. 00:33:26.872 [2024-06-07 23:29:49.335862] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.872 [2024-06-07 23:29:49.335924] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.872 [2024-06-07 23:29:49.335939] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.872 [2024-06-07 23:29:49.335950] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.872 [2024-06-07 23:29:49.335956] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.872 [2024-06-07 23:29:49.335969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.872 qpair failed and we were unable to recover it. 
00:33:26.872 [2024-06-07 23:29:49.345915] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.872 [2024-06-07 23:29:49.345973] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.872 [2024-06-07 23:29:49.345988] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.872 [2024-06-07 23:29:49.345994] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.872 [2024-06-07 23:29:49.346001] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.872 [2024-06-07 23:29:49.346014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.872 qpair failed and we were unable to recover it. 00:33:26.872 [2024-06-07 23:29:49.355909] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.872 [2024-06-07 23:29:49.356006] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.872 [2024-06-07 23:29:49.356021] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.872 [2024-06-07 23:29:49.356028] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.872 [2024-06-07 23:29:49.356034] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.872 [2024-06-07 23:29:49.356047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.872 qpair failed and we were unable to recover it. 00:33:26.872 [2024-06-07 23:29:49.366020] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.872 [2024-06-07 23:29:49.366083] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.872 [2024-06-07 23:29:49.366098] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.872 [2024-06-07 23:29:49.366104] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.872 [2024-06-07 23:29:49.366110] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.872 [2024-06-07 23:29:49.366124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.872 qpair failed and we were unable to recover it. 
00:33:26.872 [2024-06-07 23:29:49.376028] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.872 [2024-06-07 23:29:49.376083] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.872 [2024-06-07 23:29:49.376098] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.872 [2024-06-07 23:29:49.376105] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.872 [2024-06-07 23:29:49.376111] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.872 [2024-06-07 23:29:49.376124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.872 qpair failed and we were unable to recover it. 00:33:26.872 [2024-06-07 23:29:49.386052] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.872 [2024-06-07 23:29:49.386112] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.872 [2024-06-07 23:29:49.386127] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.872 [2024-06-07 23:29:49.386134] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.872 [2024-06-07 23:29:49.386139] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.872 [2024-06-07 23:29:49.386153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.872 qpair failed and we were unable to recover it. 00:33:26.872 [2024-06-07 23:29:49.396047] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.872 [2024-06-07 23:29:49.396102] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.872 [2024-06-07 23:29:49.396117] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.872 [2024-06-07 23:29:49.396124] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.872 [2024-06-07 23:29:49.396130] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.872 [2024-06-07 23:29:49.396143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.872 qpair failed and we were unable to recover it. 
00:33:26.872 [2024-06-07 23:29:49.405951] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.872 [2024-06-07 23:29:49.406031] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.872 [2024-06-07 23:29:49.406048] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.872 [2024-06-07 23:29:49.406057] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.872 [2024-06-07 23:29:49.406063] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.872 [2024-06-07 23:29:49.406077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.872 qpair failed and we were unable to recover it. 00:33:26.872 [2024-06-07 23:29:49.416108] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.872 [2024-06-07 23:29:49.416167] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.872 [2024-06-07 23:29:49.416183] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.872 [2024-06-07 23:29:49.416189] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.872 [2024-06-07 23:29:49.416195] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.872 [2024-06-07 23:29:49.416208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.872 qpair failed and we were unable to recover it. 00:33:26.872 [2024-06-07 23:29:49.426061] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.872 [2024-06-07 23:29:49.426131] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.872 [2024-06-07 23:29:49.426147] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.872 [2024-06-07 23:29:49.426158] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.872 [2024-06-07 23:29:49.426164] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.872 [2024-06-07 23:29:49.426178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.872 qpair failed and we were unable to recover it. 
00:33:26.872 [2024-06-07 23:29:49.436190] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.872 [2024-06-07 23:29:49.436293] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.872 [2024-06-07 23:29:49.436309] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.872 [2024-06-07 23:29:49.436316] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.872 [2024-06-07 23:29:49.436322] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.872 [2024-06-07 23:29:49.436335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.872 qpair failed and we were unable to recover it. 00:33:26.872 [2024-06-07 23:29:49.446193] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.872 [2024-06-07 23:29:49.446262] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.872 [2024-06-07 23:29:49.446277] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.873 [2024-06-07 23:29:49.446284] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.873 [2024-06-07 23:29:49.446290] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.873 [2024-06-07 23:29:49.446303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.873 qpair failed and we were unable to recover it. 00:33:26.873 [2024-06-07 23:29:49.456216] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.873 [2024-06-07 23:29:49.456320] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.873 [2024-06-07 23:29:49.456335] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.873 [2024-06-07 23:29:49.456342] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.873 [2024-06-07 23:29:49.456347] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.873 [2024-06-07 23:29:49.456361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.873 qpair failed and we were unable to recover it. 
00:33:26.873 [2024-06-07 23:29:49.466282] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.873 [2024-06-07 23:29:49.466342] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.873 [2024-06-07 23:29:49.466357] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.873 [2024-06-07 23:29:49.466364] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.873 [2024-06-07 23:29:49.466370] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.873 [2024-06-07 23:29:49.466383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.873 qpair failed and we were unable to recover it. 00:33:26.873 [2024-06-07 23:29:49.476270] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.873 [2024-06-07 23:29:49.476326] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.873 [2024-06-07 23:29:49.476340] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.873 [2024-06-07 23:29:49.476347] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.873 [2024-06-07 23:29:49.476353] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.873 [2024-06-07 23:29:49.476366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.873 qpair failed and we were unable to recover it. 00:33:26.873 [2024-06-07 23:29:49.486301] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.873 [2024-06-07 23:29:49.486375] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.873 [2024-06-07 23:29:49.486390] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.873 [2024-06-07 23:29:49.486397] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.873 [2024-06-07 23:29:49.486403] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.873 [2024-06-07 23:29:49.486416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.873 qpair failed and we were unable to recover it. 
00:33:26.873 [2024-06-07 23:29:49.496197] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.873 [2024-06-07 23:29:49.496264] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.873 [2024-06-07 23:29:49.496281] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.873 [2024-06-07 23:29:49.496288] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.873 [2024-06-07 23:29:49.496294] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.873 [2024-06-07 23:29:49.496308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.873 qpair failed and we were unable to recover it. 00:33:26.873 [2024-06-07 23:29:49.506386] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.873 [2024-06-07 23:29:49.506453] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.873 [2024-06-07 23:29:49.506468] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.873 [2024-06-07 23:29:49.506475] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.873 [2024-06-07 23:29:49.506481] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.873 [2024-06-07 23:29:49.506494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.873 qpair failed and we were unable to recover it. 00:33:26.873 [2024-06-07 23:29:49.516255] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.873 [2024-06-07 23:29:49.516324] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.873 [2024-06-07 23:29:49.516339] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.873 [2024-06-07 23:29:49.516349] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.873 [2024-06-07 23:29:49.516355] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.873 [2024-06-07 23:29:49.516368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.873 qpair failed and we were unable to recover it. 
00:33:26.873 [2024-06-07 23:29:49.527081] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.873 [2024-06-07 23:29:49.527153] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.873 [2024-06-07 23:29:49.527168] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.873 [2024-06-07 23:29:49.527175] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.873 [2024-06-07 23:29:49.527180] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.873 [2024-06-07 23:29:49.527194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.873 qpair failed and we were unable to recover it. 00:33:26.873 [2024-06-07 23:29:49.536434] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.873 [2024-06-07 23:29:49.536491] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.873 [2024-06-07 23:29:49.536506] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.873 [2024-06-07 23:29:49.536512] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.873 [2024-06-07 23:29:49.536518] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.873 [2024-06-07 23:29:49.536531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.873 qpair failed and we were unable to recover it. 00:33:26.873 [2024-06-07 23:29:49.546337] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.873 [2024-06-07 23:29:49.546396] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.873 [2024-06-07 23:29:49.546410] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.873 [2024-06-07 23:29:49.546417] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.873 [2024-06-07 23:29:49.546423] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:26.873 [2024-06-07 23:29:49.546436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.873 qpair failed and we were unable to recover it. 
00:33:27.136 [2024-06-07 23:29:49.556525] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.136 [2024-06-07 23:29:49.556617] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.136 [2024-06-07 23:29:49.556632] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.136 [2024-06-07 23:29:49.556638] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.136 [2024-06-07 23:29:49.556644] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.136 [2024-06-07 23:29:49.556657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.136 qpair failed and we were unable to recover it. 00:33:27.136 [2024-06-07 23:29:49.566513] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.136 [2024-06-07 23:29:49.566574] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.136 [2024-06-07 23:29:49.566589] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.136 [2024-06-07 23:29:49.566596] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.136 [2024-06-07 23:29:49.566601] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.136 [2024-06-07 23:29:49.566615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.136 qpair failed and we were unable to recover it. 00:33:27.136 [2024-06-07 23:29:49.576530] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.136 [2024-06-07 23:29:49.576629] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.136 [2024-06-07 23:29:49.576644] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.136 [2024-06-07 23:29:49.576651] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.136 [2024-06-07 23:29:49.576657] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.136 [2024-06-07 23:29:49.576670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.136 qpair failed and we were unable to recover it. 
00:33:27.136 [2024-06-07 23:29:49.586559] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.136 [2024-06-07 23:29:49.586617] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.136 [2024-06-07 23:29:49.586632] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.136 [2024-06-07 23:29:49.586639] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.136 [2024-06-07 23:29:49.586645] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.136 [2024-06-07 23:29:49.586658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.136 qpair failed and we were unable to recover it. 00:33:27.136 [2024-06-07 23:29:49.596576] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.136 [2024-06-07 23:29:49.596634] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.136 [2024-06-07 23:29:49.596650] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.136 [2024-06-07 23:29:49.596656] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.136 [2024-06-07 23:29:49.596662] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.136 [2024-06-07 23:29:49.596675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.136 qpair failed and we were unable to recover it. 00:33:27.136 [2024-06-07 23:29:49.606621] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.136 [2024-06-07 23:29:49.606704] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.136 [2024-06-07 23:29:49.606722] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.136 [2024-06-07 23:29:49.606729] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.136 [2024-06-07 23:29:49.606735] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.136 [2024-06-07 23:29:49.606748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.136 qpair failed and we were unable to recover it. 
00:33:27.136 [2024-06-07 23:29:49.616640] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.136 [2024-06-07 23:29:49.616694] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.136 [2024-06-07 23:29:49.616709] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.136 [2024-06-07 23:29:49.616715] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.136 [2024-06-07 23:29:49.616721] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.136 [2024-06-07 23:29:49.616734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.136 qpair failed and we were unable to recover it. 00:33:27.136 [2024-06-07 23:29:49.626641] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.136 [2024-06-07 23:29:49.626707] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.136 [2024-06-07 23:29:49.626721] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.136 [2024-06-07 23:29:49.626728] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.136 [2024-06-07 23:29:49.626734] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.136 [2024-06-07 23:29:49.626747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.136 qpair failed and we were unable to recover it. 00:33:27.136 [2024-06-07 23:29:49.636701] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.136 [2024-06-07 23:29:49.636787] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.136 [2024-06-07 23:29:49.636801] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.136 [2024-06-07 23:29:49.636808] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.136 [2024-06-07 23:29:49.636813] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.136 [2024-06-07 23:29:49.636826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.136 qpair failed and we were unable to recover it. 
00:33:27.136 [2024-06-07 23:29:49.646784] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.136 [2024-06-07 23:29:49.646853] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.136 [2024-06-07 23:29:49.646868] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.136 [2024-06-07 23:29:49.646875] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.136 [2024-06-07 23:29:49.646881] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.136 [2024-06-07 23:29:49.646894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.136 qpair failed and we were unable to recover it. 00:33:27.136 [2024-06-07 23:29:49.656753] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.136 [2024-06-07 23:29:49.656813] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.136 [2024-06-07 23:29:49.656828] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.136 [2024-06-07 23:29:49.656834] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.136 [2024-06-07 23:29:49.656840] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.136 [2024-06-07 23:29:49.656853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.136 qpair failed and we were unable to recover it. 00:33:27.136 [2024-06-07 23:29:49.666749] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.136 [2024-06-07 23:29:49.666807] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.136 [2024-06-07 23:29:49.666822] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.136 [2024-06-07 23:29:49.666828] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.136 [2024-06-07 23:29:49.666835] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.136 [2024-06-07 23:29:49.666847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.136 qpair failed and we were unable to recover it. 
00:33:27.136 [2024-06-07 23:29:49.676824] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.136 [2024-06-07 23:29:49.676881] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.136 [2024-06-07 23:29:49.676896] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.136 [2024-06-07 23:29:49.676903] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.136 [2024-06-07 23:29:49.676909] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.137 [2024-06-07 23:29:49.676922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.137 qpair failed and we were unable to recover it. 00:33:27.137 [2024-06-07 23:29:49.686830] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.137 [2024-06-07 23:29:49.686891] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.137 [2024-06-07 23:29:49.686906] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.137 [2024-06-07 23:29:49.686913] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.137 [2024-06-07 23:29:49.686919] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.137 [2024-06-07 23:29:49.686932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.137 qpair failed and we were unable to recover it. 00:33:27.137 [2024-06-07 23:29:49.696848] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.137 [2024-06-07 23:29:49.696916] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.137 [2024-06-07 23:29:49.696934] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.137 [2024-06-07 23:29:49.696941] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.137 [2024-06-07 23:29:49.696947] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.137 [2024-06-07 23:29:49.696960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.137 qpair failed and we were unable to recover it. 
00:33:27.137 [2024-06-07 23:29:49.706884] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.137 [2024-06-07 23:29:49.706984] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.137 [2024-06-07 23:29:49.707008] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.137 [2024-06-07 23:29:49.707016] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.137 [2024-06-07 23:29:49.707023] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.137 [2024-06-07 23:29:49.707042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.137 qpair failed and we were unable to recover it. 00:33:27.137 [2024-06-07 23:29:49.716937] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.137 [2024-06-07 23:29:49.717040] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.137 [2024-06-07 23:29:49.717057] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.137 [2024-06-07 23:29:49.717064] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.137 [2024-06-07 23:29:49.717070] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.137 [2024-06-07 23:29:49.717085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.137 qpair failed and we were unable to recover it. 00:33:27.137 [2024-06-07 23:29:49.726938] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.137 [2024-06-07 23:29:49.727005] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.137 [2024-06-07 23:29:49.727029] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.137 [2024-06-07 23:29:49.727037] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.137 [2024-06-07 23:29:49.727043] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.137 [2024-06-07 23:29:49.727062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.137 qpair failed and we were unable to recover it. 
00:33:27.137 [2024-06-07 23:29:49.736972] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.137 [2024-06-07 23:29:49.737068] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.137 [2024-06-07 23:29:49.737084] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.137 [2024-06-07 23:29:49.737091] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.137 [2024-06-07 23:29:49.737097] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.137 [2024-06-07 23:29:49.737116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.137 qpair failed and we were unable to recover it. 00:33:27.137 [2024-06-07 23:29:49.746995] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.137 [2024-06-07 23:29:49.747081] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.137 [2024-06-07 23:29:49.747097] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.137 [2024-06-07 23:29:49.747103] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.137 [2024-06-07 23:29:49.747109] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.137 [2024-06-07 23:29:49.747124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.137 qpair failed and we were unable to recover it. 00:33:27.137 [2024-06-07 23:29:49.756930] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.137 [2024-06-07 23:29:49.756987] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.137 [2024-06-07 23:29:49.757003] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.137 [2024-06-07 23:29:49.757009] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.137 [2024-06-07 23:29:49.757015] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.137 [2024-06-07 23:29:49.757029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.137 qpair failed and we were unable to recover it. 
00:33:27.137 [2024-06-07 23:29:49.766989] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.137 [2024-06-07 23:29:49.767050] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.137 [2024-06-07 23:29:49.767066] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.137 [2024-06-07 23:29:49.767072] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.137 [2024-06-07 23:29:49.767078] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.137 [2024-06-07 23:29:49.767092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.137 qpair failed and we were unable to recover it. 00:33:27.137 [2024-06-07 23:29:49.777142] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.137 [2024-06-07 23:29:49.777198] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.137 [2024-06-07 23:29:49.777213] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.137 [2024-06-07 23:29:49.777220] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.137 [2024-06-07 23:29:49.777226] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.137 [2024-06-07 23:29:49.777239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.137 qpair failed and we were unable to recover it. 00:33:27.137 [2024-06-07 23:29:49.787124] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.137 [2024-06-07 23:29:49.787186] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.137 [2024-06-07 23:29:49.787205] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.137 [2024-06-07 23:29:49.787212] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.137 [2024-06-07 23:29:49.787218] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.137 [2024-06-07 23:29:49.787231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.137 qpair failed and we were unable to recover it. 
00:33:27.137 [2024-06-07 23:29:49.797157] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.137 [2024-06-07 23:29:49.797215] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.137 [2024-06-07 23:29:49.797230] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.137 [2024-06-07 23:29:49.797236] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.137 [2024-06-07 23:29:49.797249] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.137 [2024-06-07 23:29:49.797262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.137 qpair failed and we were unable to recover it. 00:33:27.137 [2024-06-07 23:29:49.807160] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.137 [2024-06-07 23:29:49.807223] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.137 [2024-06-07 23:29:49.807238] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.137 [2024-06-07 23:29:49.807249] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.137 [2024-06-07 23:29:49.807256] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.137 [2024-06-07 23:29:49.807270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.137 qpair failed and we were unable to recover it. 00:33:27.400 [2024-06-07 23:29:49.817203] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.400 [2024-06-07 23:29:49.817263] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.400 [2024-06-07 23:29:49.817278] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.400 [2024-06-07 23:29:49.817285] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.400 [2024-06-07 23:29:49.817291] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.400 [2024-06-07 23:29:49.817304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.400 qpair failed and we were unable to recover it. 
00:33:27.400 [2024-06-07 23:29:49.827221] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.400 [2024-06-07 23:29:49.827285] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.400 [2024-06-07 23:29:49.827301] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.400 [2024-06-07 23:29:49.827307] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.400 [2024-06-07 23:29:49.827313] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.400 [2024-06-07 23:29:49.827330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.400 qpair failed and we were unable to recover it. 00:33:27.400 [2024-06-07 23:29:49.837295] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.400 [2024-06-07 23:29:49.837375] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.400 [2024-06-07 23:29:49.837390] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.400 [2024-06-07 23:29:49.837397] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.400 [2024-06-07 23:29:49.837403] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.400 [2024-06-07 23:29:49.837416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.400 qpair failed and we were unable to recover it. 00:33:27.400 [2024-06-07 23:29:49.847303] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.400 [2024-06-07 23:29:49.847370] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.400 [2024-06-07 23:29:49.847385] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.400 [2024-06-07 23:29:49.847391] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.400 [2024-06-07 23:29:49.847397] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.400 [2024-06-07 23:29:49.847411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.400 qpair failed and we were unable to recover it. 
00:33:27.400 [2024-06-07 23:29:49.857312] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.400 [2024-06-07 23:29:49.857377] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.400 [2024-06-07 23:29:49.857392] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.400 [2024-06-07 23:29:49.857399] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.400 [2024-06-07 23:29:49.857405] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.400 [2024-06-07 23:29:49.857418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.400 qpair failed and we were unable to recover it. 00:33:27.400 [2024-06-07 23:29:49.867337] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.400 [2024-06-07 23:29:49.867398] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.400 [2024-06-07 23:29:49.867413] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.400 [2024-06-07 23:29:49.867420] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.400 [2024-06-07 23:29:49.867426] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.400 [2024-06-07 23:29:49.867439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.400 qpair failed and we were unable to recover it. 00:33:27.400 [2024-06-07 23:29:49.877390] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.400 [2024-06-07 23:29:49.877449] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.400 [2024-06-07 23:29:49.877468] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.400 [2024-06-07 23:29:49.877474] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.400 [2024-06-07 23:29:49.877480] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.400 [2024-06-07 23:29:49.877494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.400 qpair failed and we were unable to recover it. 
00:33:27.400 [2024-06-07 23:29:49.887431] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.400 [2024-06-07 23:29:49.887500] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.400 [2024-06-07 23:29:49.887515] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.400 [2024-06-07 23:29:49.887521] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.400 [2024-06-07 23:29:49.887527] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.400 [2024-06-07 23:29:49.887541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.400 qpair failed and we were unable to recover it. 00:33:27.400 [2024-06-07 23:29:49.897401] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.400 [2024-06-07 23:29:49.897465] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.400 [2024-06-07 23:29:49.897480] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.400 [2024-06-07 23:29:49.897487] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.401 [2024-06-07 23:29:49.897493] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.401 [2024-06-07 23:29:49.897506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.401 qpair failed and we were unable to recover it. 00:33:27.401 [2024-06-07 23:29:49.907449] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.401 [2024-06-07 23:29:49.907514] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.401 [2024-06-07 23:29:49.907529] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.401 [2024-06-07 23:29:49.907535] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.401 [2024-06-07 23:29:49.907541] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.401 [2024-06-07 23:29:49.907555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.401 qpair failed and we were unable to recover it. 
00:33:27.401 [2024-06-07 23:29:49.917425] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.401 [2024-06-07 23:29:49.917483] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.401 [2024-06-07 23:29:49.917499] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.401 [2024-06-07 23:29:49.917505] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.401 [2024-06-07 23:29:49.917511] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.401 [2024-06-07 23:29:49.917533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.401 qpair failed and we were unable to recover it. 00:33:27.401 [2024-06-07 23:29:49.927505] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.401 [2024-06-07 23:29:49.927566] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.401 [2024-06-07 23:29:49.927581] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.401 [2024-06-07 23:29:49.927588] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.401 [2024-06-07 23:29:49.927594] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.401 [2024-06-07 23:29:49.927607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.401 qpair failed and we were unable to recover it. 00:33:27.401 [2024-06-07 23:29:49.937406] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.401 [2024-06-07 23:29:49.937467] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.401 [2024-06-07 23:29:49.937483] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.401 [2024-06-07 23:29:49.937489] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.401 [2024-06-07 23:29:49.937496] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.401 [2024-06-07 23:29:49.937509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.401 qpair failed and we were unable to recover it. 
00:33:27.401 [2024-06-07 23:29:49.947552] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.401 [2024-06-07 23:29:49.947615] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.401 [2024-06-07 23:29:49.947629] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.401 [2024-06-07 23:29:49.947636] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.401 [2024-06-07 23:29:49.947642] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.401 [2024-06-07 23:29:49.947655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.401 qpair failed and we were unable to recover it. 00:33:27.401 [2024-06-07 23:29:49.957595] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.401 [2024-06-07 23:29:49.957654] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.401 [2024-06-07 23:29:49.957669] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.401 [2024-06-07 23:29:49.957675] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.401 [2024-06-07 23:29:49.957681] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.401 [2024-06-07 23:29:49.957694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.401 qpair failed and we were unable to recover it. 00:33:27.401 [2024-06-07 23:29:49.967617] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.401 [2024-06-07 23:29:49.967681] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.401 [2024-06-07 23:29:49.967700] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.401 [2024-06-07 23:29:49.967706] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.401 [2024-06-07 23:29:49.967712] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.401 [2024-06-07 23:29:49.967726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.401 qpair failed and we were unable to recover it. 
00:33:27.401 [2024-06-07 23:29:49.977648] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.401 [2024-06-07 23:29:49.977709] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.401 [2024-06-07 23:29:49.977724] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.401 [2024-06-07 23:29:49.977730] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.401 [2024-06-07 23:29:49.977736] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.401 [2024-06-07 23:29:49.977749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.401 qpair failed and we were unable to recover it. 00:33:27.401 [2024-06-07 23:29:49.987648] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.401 [2024-06-07 23:29:49.987707] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.401 [2024-06-07 23:29:49.987722] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.401 [2024-06-07 23:29:49.987729] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.401 [2024-06-07 23:29:49.987735] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.401 [2024-06-07 23:29:49.987748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.401 qpair failed and we were unable to recover it. 00:33:27.401 [2024-06-07 23:29:49.997691] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.401 [2024-06-07 23:29:49.997805] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.401 [2024-06-07 23:29:49.997820] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.401 [2024-06-07 23:29:49.997827] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.401 [2024-06-07 23:29:49.997833] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.401 [2024-06-07 23:29:49.997846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.401 qpair failed and we were unable to recover it. 
00:33:27.401 [2024-06-07 23:29:50.007713] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.401 [2024-06-07 23:29:50.007782] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.401 [2024-06-07 23:29:50.007798] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.401 [2024-06-07 23:29:50.007805] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.401 [2024-06-07 23:29:50.007811] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.401 [2024-06-07 23:29:50.007829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.401 qpair failed and we were unable to recover it. 00:33:27.401 [2024-06-07 23:29:50.017659] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.401 [2024-06-07 23:29:50.017737] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.401 [2024-06-07 23:29:50.017752] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.401 [2024-06-07 23:29:50.017759] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.401 [2024-06-07 23:29:50.017766] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.401 [2024-06-07 23:29:50.017779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.401 qpair failed and we were unable to recover it. 00:33:27.401 [2024-06-07 23:29:50.027772] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.401 [2024-06-07 23:29:50.027832] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.401 [2024-06-07 23:29:50.027847] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.402 [2024-06-07 23:29:50.027854] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.402 [2024-06-07 23:29:50.027861] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.402 [2024-06-07 23:29:50.027874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.402 qpair failed and we were unable to recover it. 
00:33:27.402 [2024-06-07 23:29:50.037831] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.402 [2024-06-07 23:29:50.037932] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.402 [2024-06-07 23:29:50.037949] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.402 [2024-06-07 23:29:50.037956] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.402 [2024-06-07 23:29:50.037963] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.402 [2024-06-07 23:29:50.037978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.402 qpair failed and we were unable to recover it. 00:33:27.402 [2024-06-07 23:29:50.047851] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.402 [2024-06-07 23:29:50.047920] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.402 [2024-06-07 23:29:50.047944] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.402 [2024-06-07 23:29:50.047952] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.402 [2024-06-07 23:29:50.047959] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.402 [2024-06-07 23:29:50.047979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.402 qpair failed and we were unable to recover it. 00:33:27.402 [2024-06-07 23:29:50.057733] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.402 [2024-06-07 23:29:50.057835] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.402 [2024-06-07 23:29:50.057864] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.402 [2024-06-07 23:29:50.057873] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.402 [2024-06-07 23:29:50.057879] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.402 [2024-06-07 23:29:50.057898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.402 qpair failed and we were unable to recover it. 
00:33:27.402 [2024-06-07 23:29:50.067874] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.402 [2024-06-07 23:29:50.067937] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.402 [2024-06-07 23:29:50.067954] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.402 [2024-06-07 23:29:50.067961] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.402 [2024-06-07 23:29:50.067967] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.402 [2024-06-07 23:29:50.067982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.402 qpair failed and we were unable to recover it. 00:33:27.402 [2024-06-07 23:29:50.077863] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.402 [2024-06-07 23:29:50.077928] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.402 [2024-06-07 23:29:50.077944] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.402 [2024-06-07 23:29:50.077951] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.402 [2024-06-07 23:29:50.077958] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.402 [2024-06-07 23:29:50.077972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.402 qpair failed and we were unable to recover it. 00:33:27.664 [2024-06-07 23:29:50.087970] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.664 [2024-06-07 23:29:50.088035] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.664 [2024-06-07 23:29:50.088050] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.664 [2024-06-07 23:29:50.088058] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.664 [2024-06-07 23:29:50.088064] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.664 [2024-06-07 23:29:50.088078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.664 qpair failed and we were unable to recover it. 
00:33:27.664 [2024-06-07 23:29:50.098040] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.664 [2024-06-07 23:29:50.098145] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.664 [2024-06-07 23:29:50.098162] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.664 [2024-06-07 23:29:50.098170] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.664 [2024-06-07 23:29:50.098180] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.664 [2024-06-07 23:29:50.098195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.664 qpair failed and we were unable to recover it. 00:33:27.664 [2024-06-07 23:29:50.107990] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.664 [2024-06-07 23:29:50.108076] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.664 [2024-06-07 23:29:50.108092] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.664 [2024-06-07 23:29:50.108098] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.664 [2024-06-07 23:29:50.108105] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.664 [2024-06-07 23:29:50.108118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.664 qpair failed and we were unable to recover it. 00:33:27.664 [2024-06-07 23:29:50.118019] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.664 [2024-06-07 23:29:50.118077] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.664 [2024-06-07 23:29:50.118092] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.665 [2024-06-07 23:29:50.118099] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.665 [2024-06-07 23:29:50.118106] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.665 [2024-06-07 23:29:50.118119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.665 qpair failed and we were unable to recover it. 
00:33:27.665 [2024-06-07 23:29:50.128032] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.665 [2024-06-07 23:29:50.128097] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.665 [2024-06-07 23:29:50.128114] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.665 [2024-06-07 23:29:50.128121] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.665 [2024-06-07 23:29:50.128127] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.665 [2024-06-07 23:29:50.128141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.665 qpair failed and we were unable to recover it. 00:33:27.665 [2024-06-07 23:29:50.138064] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.665 [2024-06-07 23:29:50.138118] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.665 [2024-06-07 23:29:50.138133] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.665 [2024-06-07 23:29:50.138140] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.665 [2024-06-07 23:29:50.138146] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.665 [2024-06-07 23:29:50.138159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.665 qpair failed and we were unable to recover it. 00:33:27.665 [2024-06-07 23:29:50.148002] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.665 [2024-06-07 23:29:50.148066] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.665 [2024-06-07 23:29:50.148081] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.665 [2024-06-07 23:29:50.148088] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.665 [2024-06-07 23:29:50.148094] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.665 [2024-06-07 23:29:50.148108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.665 qpair failed and we were unable to recover it. 
00:33:27.665 [2024-06-07 23:29:50.158140] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.665 [2024-06-07 23:29:50.158203] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.665 [2024-06-07 23:29:50.158218] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.665 [2024-06-07 23:29:50.158225] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.665 [2024-06-07 23:29:50.158231] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.665 [2024-06-07 23:29:50.158250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.665 qpair failed and we were unable to recover it. 00:33:27.665 [2024-06-07 23:29:50.168146] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.665 [2024-06-07 23:29:50.168208] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.665 [2024-06-07 23:29:50.168223] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.665 [2024-06-07 23:29:50.168230] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.665 [2024-06-07 23:29:50.168236] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.665 [2024-06-07 23:29:50.168255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.665 qpair failed and we were unable to recover it. 00:33:27.665 [2024-06-07 23:29:50.178251] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.665 [2024-06-07 23:29:50.178311] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.665 [2024-06-07 23:29:50.178326] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.665 [2024-06-07 23:29:50.178332] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.665 [2024-06-07 23:29:50.178338] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.665 [2024-06-07 23:29:50.178352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.665 qpair failed and we were unable to recover it. 
00:33:27.665 [2024-06-07 23:29:50.188070] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.665 [2024-06-07 23:29:50.188129] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.665 [2024-06-07 23:29:50.188145] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.665 [2024-06-07 23:29:50.188151] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.665 [2024-06-07 23:29:50.188161] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.665 [2024-06-07 23:29:50.188175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.665 qpair failed and we were unable to recover it. 00:33:27.665 [2024-06-07 23:29:50.198213] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.665 [2024-06-07 23:29:50.198279] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.665 [2024-06-07 23:29:50.198295] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.665 [2024-06-07 23:29:50.198302] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.665 [2024-06-07 23:29:50.198308] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.665 [2024-06-07 23:29:50.198322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.665 qpair failed and we were unable to recover it. 00:33:27.665 [2024-06-07 23:29:50.208272] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.665 [2024-06-07 23:29:50.208335] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.665 [2024-06-07 23:29:50.208350] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.665 [2024-06-07 23:29:50.208357] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.665 [2024-06-07 23:29:50.208363] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.665 [2024-06-07 23:29:50.208376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.665 qpair failed and we were unable to recover it. 
00:33:27.665 [2024-06-07 23:29:50.218267] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.665 [2024-06-07 23:29:50.218330] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.665 [2024-06-07 23:29:50.218345] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.665 [2024-06-07 23:29:50.218351] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.665 [2024-06-07 23:29:50.218358] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.665 [2024-06-07 23:29:50.218371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.665 qpair failed and we were unable to recover it. 00:33:27.665 [2024-06-07 23:29:50.228331] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.665 [2024-06-07 23:29:50.228389] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.665 [2024-06-07 23:29:50.228404] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.665 [2024-06-07 23:29:50.228411] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.665 [2024-06-07 23:29:50.228416] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.665 [2024-06-07 23:29:50.228430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.665 qpair failed and we were unable to recover it. 00:33:27.665 [2024-06-07 23:29:50.238331] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.665 [2024-06-07 23:29:50.238391] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.665 [2024-06-07 23:29:50.238406] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.665 [2024-06-07 23:29:50.238413] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.665 [2024-06-07 23:29:50.238419] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.665 [2024-06-07 23:29:50.238432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.665 qpair failed and we were unable to recover it. 
00:33:27.665 [2024-06-07 23:29:50.248394] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.666 [2024-06-07 23:29:50.248493] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.666 [2024-06-07 23:29:50.248508] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.666 [2024-06-07 23:29:50.248514] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.666 [2024-06-07 23:29:50.248520] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.666 [2024-06-07 23:29:50.248534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.666 qpair failed and we were unable to recover it. 00:33:27.666 [2024-06-07 23:29:50.258381] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.666 [2024-06-07 23:29:50.258482] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.666 [2024-06-07 23:29:50.258498] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.666 [2024-06-07 23:29:50.258504] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.666 [2024-06-07 23:29:50.258510] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.666 [2024-06-07 23:29:50.258524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.666 qpair failed and we were unable to recover it. 00:33:27.666 [2024-06-07 23:29:50.268417] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.666 [2024-06-07 23:29:50.268473] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.666 [2024-06-07 23:29:50.268488] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.666 [2024-06-07 23:29:50.268494] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.666 [2024-06-07 23:29:50.268500] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.666 [2024-06-07 23:29:50.268514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.666 qpair failed and we were unable to recover it. 
00:33:27.666 [2024-06-07 23:29:50.278448] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.666 [2024-06-07 23:29:50.278507] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.666 [2024-06-07 23:29:50.278523] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.666 [2024-06-07 23:29:50.278529] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.666 [2024-06-07 23:29:50.278539] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.666 [2024-06-07 23:29:50.278553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.666 qpair failed and we were unable to recover it. 00:33:27.666 [2024-06-07 23:29:50.288485] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.666 [2024-06-07 23:29:50.288546] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.666 [2024-06-07 23:29:50.288561] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.666 [2024-06-07 23:29:50.288568] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.666 [2024-06-07 23:29:50.288574] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.666 [2024-06-07 23:29:50.288587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.666 qpair failed and we were unable to recover it. 00:33:27.666 [2024-06-07 23:29:50.298512] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.666 [2024-06-07 23:29:50.298566] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.666 [2024-06-07 23:29:50.298580] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.666 [2024-06-07 23:29:50.298587] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.666 [2024-06-07 23:29:50.298593] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.666 [2024-06-07 23:29:50.298606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.666 qpair failed and we were unable to recover it. 
00:33:27.666 [2024-06-07 23:29:50.308591] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.666 [2024-06-07 23:29:50.308701] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.666 [2024-06-07 23:29:50.308715] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.666 [2024-06-07 23:29:50.308722] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.666 [2024-06-07 23:29:50.308728] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.666 [2024-06-07 23:29:50.308741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.666 qpair failed and we were unable to recover it. 00:33:27.666 [2024-06-07 23:29:50.318442] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.666 [2024-06-07 23:29:50.318503] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.666 [2024-06-07 23:29:50.318519] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.666 [2024-06-07 23:29:50.318525] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.666 [2024-06-07 23:29:50.318531] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.666 [2024-06-07 23:29:50.318545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.666 qpair failed and we were unable to recover it. 00:33:27.666 [2024-06-07 23:29:50.328567] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.666 [2024-06-07 23:29:50.328633] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.666 [2024-06-07 23:29:50.328649] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.666 [2024-06-07 23:29:50.328656] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.666 [2024-06-07 23:29:50.328662] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.666 [2024-06-07 23:29:50.328676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.666 qpair failed and we were unable to recover it. 
00:33:27.666 [2024-06-07 23:29:50.338608] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.666 [2024-06-07 23:29:50.338665] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.666 [2024-06-07 23:29:50.338680] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.666 [2024-06-07 23:29:50.338687] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.666 [2024-06-07 23:29:50.338693] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.666 [2024-06-07 23:29:50.338706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.666 qpair failed and we were unable to recover it. 00:33:27.928 [2024-06-07 23:29:50.348647] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.928 [2024-06-07 23:29:50.348707] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.928 [2024-06-07 23:29:50.348722] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.928 [2024-06-07 23:29:50.348729] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.928 [2024-06-07 23:29:50.348735] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.928 [2024-06-07 23:29:50.348748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.928 qpair failed and we were unable to recover it. 00:33:27.929 [2024-06-07 23:29:50.358663] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.929 [2024-06-07 23:29:50.358727] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.929 [2024-06-07 23:29:50.358741] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.929 [2024-06-07 23:29:50.358748] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.929 [2024-06-07 23:29:50.358754] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.929 [2024-06-07 23:29:50.358767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.929 qpair failed and we were unable to recover it. 
00:33:27.929 [2024-06-07 23:29:50.368698] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.929 [2024-06-07 23:29:50.368763] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.929 [2024-06-07 23:29:50.368778] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.929 [2024-06-07 23:29:50.368785] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.929 [2024-06-07 23:29:50.368795] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.929 [2024-06-07 23:29:50.368808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.929 qpair failed and we were unable to recover it. 00:33:27.929 [2024-06-07 23:29:50.378736] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.929 [2024-06-07 23:29:50.378793] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.929 [2024-06-07 23:29:50.378808] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.929 [2024-06-07 23:29:50.378815] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.929 [2024-06-07 23:29:50.378821] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.929 [2024-06-07 23:29:50.378835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.929 qpair failed and we were unable to recover it. 00:33:27.929 [2024-06-07 23:29:50.388794] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.929 [2024-06-07 23:29:50.388872] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.929 [2024-06-07 23:29:50.388888] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.929 [2024-06-07 23:29:50.388895] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.929 [2024-06-07 23:29:50.388901] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.929 [2024-06-07 23:29:50.388915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.929 qpair failed and we were unable to recover it. 
00:33:27.929 [2024-06-07 23:29:50.398772] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.929 [2024-06-07 23:29:50.398829] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.929 [2024-06-07 23:29:50.398844] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.929 [2024-06-07 23:29:50.398851] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.929 [2024-06-07 23:29:50.398857] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.929 [2024-06-07 23:29:50.398870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.929 qpair failed and we were unable to recover it. 00:33:27.929 [2024-06-07 23:29:50.408807] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.929 [2024-06-07 23:29:50.408868] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.929 [2024-06-07 23:29:50.408883] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.929 [2024-06-07 23:29:50.408890] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.929 [2024-06-07 23:29:50.408896] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.929 [2024-06-07 23:29:50.408909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.929 qpair failed and we were unable to recover it. 00:33:27.929 [2024-06-07 23:29:50.418696] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.929 [2024-06-07 23:29:50.418754] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.929 [2024-06-07 23:29:50.418769] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.929 [2024-06-07 23:29:50.418775] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.929 [2024-06-07 23:29:50.418781] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.929 [2024-06-07 23:29:50.418795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.929 qpair failed and we were unable to recover it. 
00:33:27.929 [2024-06-07 23:29:50.428875] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.929 [2024-06-07 23:29:50.428935] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.929 [2024-06-07 23:29:50.428950] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.929 [2024-06-07 23:29:50.428957] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.929 [2024-06-07 23:29:50.428963] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.929 [2024-06-07 23:29:50.428976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.929 qpair failed and we were unable to recover it. 00:33:27.929 [2024-06-07 23:29:50.438889] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.929 [2024-06-07 23:29:50.438953] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.929 [2024-06-07 23:29:50.438977] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.929 [2024-06-07 23:29:50.438985] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.929 [2024-06-07 23:29:50.438991] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.929 [2024-06-07 23:29:50.439011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.929 qpair failed and we were unable to recover it. 00:33:27.929 [2024-06-07 23:29:50.448914] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.929 [2024-06-07 23:29:50.448983] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.929 [2024-06-07 23:29:50.449001] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.929 [2024-06-07 23:29:50.449008] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.929 [2024-06-07 23:29:50.449014] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.929 [2024-06-07 23:29:50.449029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.929 qpair failed and we were unable to recover it. 
00:33:27.929 [2024-06-07 23:29:50.458967] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.929 [2024-06-07 23:29:50.459088] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.929 [2024-06-07 23:29:50.459113] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.929 [2024-06-07 23:29:50.459125] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.929 [2024-06-07 23:29:50.459132] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.929 [2024-06-07 23:29:50.459151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.929 qpair failed and we were unable to recover it. 00:33:27.929 [2024-06-07 23:29:50.468974] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.929 [2024-06-07 23:29:50.469086] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.929 [2024-06-07 23:29:50.469111] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.929 [2024-06-07 23:29:50.469119] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.929 [2024-06-07 23:29:50.469126] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.929 [2024-06-07 23:29:50.469144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.930 qpair failed and we were unable to recover it. 00:33:27.930 [2024-06-07 23:29:50.478986] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.930 [2024-06-07 23:29:50.479045] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.930 [2024-06-07 23:29:50.479062] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.930 [2024-06-07 23:29:50.479069] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.930 [2024-06-07 23:29:50.479075] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.930 [2024-06-07 23:29:50.479090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.930 qpair failed and we were unable to recover it. 
00:33:27.930 [2024-06-07 23:29:50.489046] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.930 [2024-06-07 23:29:50.489110] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.930 [2024-06-07 23:29:50.489134] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.930 [2024-06-07 23:29:50.489142] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.930 [2024-06-07 23:29:50.489149] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.930 [2024-06-07 23:29:50.489167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.930 qpair failed and we were unable to recover it. 00:33:27.930 [2024-06-07 23:29:50.499044] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.930 [2024-06-07 23:29:50.499098] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.930 [2024-06-07 23:29:50.499115] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.930 [2024-06-07 23:29:50.499122] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.930 [2024-06-07 23:29:50.499128] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.930 [2024-06-07 23:29:50.499142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.930 qpair failed and we were unable to recover it. 00:33:27.930 [2024-06-07 23:29:50.509060] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.930 [2024-06-07 23:29:50.509117] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.930 [2024-06-07 23:29:50.509133] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.930 [2024-06-07 23:29:50.509140] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.930 [2024-06-07 23:29:50.509146] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.930 [2024-06-07 23:29:50.509160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.930 qpair failed and we were unable to recover it. 
00:33:27.930 [2024-06-07 23:29:50.519141] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.930 [2024-06-07 23:29:50.519240] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.930 [2024-06-07 23:29:50.519259] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.930 [2024-06-07 23:29:50.519265] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.930 [2024-06-07 23:29:50.519272] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.930 [2024-06-07 23:29:50.519286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.930 qpair failed and we were unable to recover it. 00:33:27.930 [2024-06-07 23:29:50.529127] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.930 [2024-06-07 23:29:50.529191] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.930 [2024-06-07 23:29:50.529206] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.930 [2024-06-07 23:29:50.529213] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.930 [2024-06-07 23:29:50.529219] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.930 [2024-06-07 23:29:50.529232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.930 qpair failed and we were unable to recover it. 00:33:27.930 [2024-06-07 23:29:50.539155] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.930 [2024-06-07 23:29:50.539224] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.930 [2024-06-07 23:29:50.539240] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.930 [2024-06-07 23:29:50.539249] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.930 [2024-06-07 23:29:50.539256] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.930 [2024-06-07 23:29:50.539270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.930 qpair failed and we were unable to recover it. 
00:33:27.930 [2024-06-07 23:29:50.549173] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.930 [2024-06-07 23:29:50.549229] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.930 [2024-06-07 23:29:50.549247] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.930 [2024-06-07 23:29:50.549257] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.930 [2024-06-07 23:29:50.549264] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.930 [2024-06-07 23:29:50.549277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.930 qpair failed and we were unable to recover it. 00:33:27.930 [2024-06-07 23:29:50.559200] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.930 [2024-06-07 23:29:50.559265] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.930 [2024-06-07 23:29:50.559280] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.930 [2024-06-07 23:29:50.559287] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.930 [2024-06-07 23:29:50.559293] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.930 [2024-06-07 23:29:50.559306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.930 qpair failed and we were unable to recover it. 00:33:27.930 [2024-06-07 23:29:50.569246] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.930 [2024-06-07 23:29:50.569307] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.930 [2024-06-07 23:29:50.569322] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.930 [2024-06-07 23:29:50.569328] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.930 [2024-06-07 23:29:50.569334] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.930 [2024-06-07 23:29:50.569347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.930 qpair failed and we were unable to recover it. 
00:33:27.930 [2024-06-07 23:29:50.579261] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.930 [2024-06-07 23:29:50.579374] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.930 [2024-06-07 23:29:50.579389] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.930 [2024-06-07 23:29:50.579396] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.931 [2024-06-07 23:29:50.579402] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.931 [2024-06-07 23:29:50.579415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.931 qpair failed and we were unable to recover it. 00:33:27.931 [2024-06-07 23:29:50.589308] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.931 [2024-06-07 23:29:50.589367] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.931 [2024-06-07 23:29:50.589382] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.931 [2024-06-07 23:29:50.589389] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.931 [2024-06-07 23:29:50.589394] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.931 [2024-06-07 23:29:50.589408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.931 qpair failed and we were unable to recover it. 00:33:27.931 [2024-06-07 23:29:50.599362] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:27.931 [2024-06-07 23:29:50.599423] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:27.931 [2024-06-07 23:29:50.599438] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:27.931 [2024-06-07 23:29:50.599445] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:27.931 [2024-06-07 23:29:50.599451] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:27.931 [2024-06-07 23:29:50.599464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:27.931 qpair failed and we were unable to recover it. 
00:33:28.193 [2024-06-07 23:29:50.609377] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.193 [2024-06-07 23:29:50.609437] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.193 [2024-06-07 23:29:50.609452] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.193 [2024-06-07 23:29:50.609459] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.193 [2024-06-07 23:29:50.609465] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.193 [2024-06-07 23:29:50.609478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.193 qpair failed and we were unable to recover it. 00:33:28.194 [2024-06-07 23:29:50.619385] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.194 [2024-06-07 23:29:50.619439] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.194 [2024-06-07 23:29:50.619454] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.194 [2024-06-07 23:29:50.619461] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.194 [2024-06-07 23:29:50.619467] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.194 [2024-06-07 23:29:50.619480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.194 qpair failed and we were unable to recover it. 00:33:28.194 [2024-06-07 23:29:50.629383] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.194 [2024-06-07 23:29:50.629443] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.194 [2024-06-07 23:29:50.629459] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.194 [2024-06-07 23:29:50.629465] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.194 [2024-06-07 23:29:50.629471] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.194 [2024-06-07 23:29:50.629484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.194 qpair failed and we were unable to recover it. 
00:33:28.194 [2024-06-07 23:29:50.639378] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.194 [2024-06-07 23:29:50.639437] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.194 [2024-06-07 23:29:50.639452] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.194 [2024-06-07 23:29:50.639462] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.194 [2024-06-07 23:29:50.639468] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.194 [2024-06-07 23:29:50.639481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.194 qpair failed and we were unable to recover it. 00:33:28.194 [2024-06-07 23:29:50.649471] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.194 [2024-06-07 23:29:50.649532] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.194 [2024-06-07 23:29:50.649547] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.194 [2024-06-07 23:29:50.649553] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.194 [2024-06-07 23:29:50.649559] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.194 [2024-06-07 23:29:50.649572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.194 qpair failed and we were unable to recover it. 00:33:28.194 [2024-06-07 23:29:50.659428] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.194 [2024-06-07 23:29:50.659488] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.194 [2024-06-07 23:29:50.659503] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.194 [2024-06-07 23:29:50.659510] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.194 [2024-06-07 23:29:50.659516] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.194 [2024-06-07 23:29:50.659529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.194 qpair failed and we were unable to recover it. 
00:33:28.194 [2024-06-07 23:29:50.669385] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.194 [2024-06-07 23:29:50.669455] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.194 [2024-06-07 23:29:50.669470] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.194 [2024-06-07 23:29:50.669476] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.194 [2024-06-07 23:29:50.669483] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.194 [2024-06-07 23:29:50.669495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.194 qpair failed and we were unable to recover it. 00:33:28.194 [2024-06-07 23:29:50.679554] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.194 [2024-06-07 23:29:50.679627] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.194 [2024-06-07 23:29:50.679642] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.194 [2024-06-07 23:29:50.679648] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.194 [2024-06-07 23:29:50.679654] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.194 [2024-06-07 23:29:50.679667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.194 qpair failed and we were unable to recover it. 00:33:28.194 [2024-06-07 23:29:50.689626] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.194 [2024-06-07 23:29:50.689688] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.194 [2024-06-07 23:29:50.689705] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.194 [2024-06-07 23:29:50.689711] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.194 [2024-06-07 23:29:50.689721] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.194 [2024-06-07 23:29:50.689735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.194 qpair failed and we were unable to recover it. 
00:33:28.194 [2024-06-07 23:29:50.699585] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.194 [2024-06-07 23:29:50.699645] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.194 [2024-06-07 23:29:50.699661] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.194 [2024-06-07 23:29:50.699668] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.194 [2024-06-07 23:29:50.699674] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.194 [2024-06-07 23:29:50.699687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.194 qpair failed and we were unable to recover it. 00:33:28.194 [2024-06-07 23:29:50.709597] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.194 [2024-06-07 23:29:50.709656] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.194 [2024-06-07 23:29:50.709671] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.194 [2024-06-07 23:29:50.709678] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.194 [2024-06-07 23:29:50.709684] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.194 [2024-06-07 23:29:50.709697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.194 qpair failed and we were unable to recover it. 00:33:28.194 [2024-06-07 23:29:50.719628] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.194 [2024-06-07 23:29:50.719695] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.194 [2024-06-07 23:29:50.719711] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.194 [2024-06-07 23:29:50.719717] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.194 [2024-06-07 23:29:50.719723] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.194 [2024-06-07 23:29:50.719736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.194 qpair failed and we were unable to recover it. 
00:33:28.194 [2024-06-07 23:29:50.729622] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.194 [2024-06-07 23:29:50.729681] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.194 [2024-06-07 23:29:50.729698] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.194 [2024-06-07 23:29:50.729711] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.194 [2024-06-07 23:29:50.729717] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.194 [2024-06-07 23:29:50.729731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.194 qpair failed and we were unable to recover it. 00:33:28.194 [2024-06-07 23:29:50.739583] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.194 [2024-06-07 23:29:50.739639] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.194 [2024-06-07 23:29:50.739654] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.194 [2024-06-07 23:29:50.739660] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.194 [2024-06-07 23:29:50.739667] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.194 [2024-06-07 23:29:50.739680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.194 qpair failed and we were unable to recover it. 00:33:28.194 [2024-06-07 23:29:50.749724] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.195 [2024-06-07 23:29:50.749791] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.195 [2024-06-07 23:29:50.749807] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.195 [2024-06-07 23:29:50.749814] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.195 [2024-06-07 23:29:50.749820] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.195 [2024-06-07 23:29:50.749834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.195 qpair failed and we were unable to recover it. 
00:33:28.195 [2024-06-07 23:29:50.759764] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.195 [2024-06-07 23:29:50.759828] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.195 [2024-06-07 23:29:50.759843] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.195 [2024-06-07 23:29:50.759849] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.195 [2024-06-07 23:29:50.759855] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.195 [2024-06-07 23:29:50.759868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.195 qpair failed and we were unable to recover it. 00:33:28.195 [2024-06-07 23:29:50.769765] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.195 [2024-06-07 23:29:50.769833] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.195 [2024-06-07 23:29:50.769847] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.195 [2024-06-07 23:29:50.769854] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.195 [2024-06-07 23:29:50.769860] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.195 [2024-06-07 23:29:50.769873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.195 qpair failed and we were unable to recover it. 00:33:28.195 [2024-06-07 23:29:50.779813] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.195 [2024-06-07 23:29:50.779913] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.195 [2024-06-07 23:29:50.779928] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.195 [2024-06-07 23:29:50.779934] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.195 [2024-06-07 23:29:50.779940] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.195 [2024-06-07 23:29:50.779953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.195 qpair failed and we were unable to recover it. 
00:33:28.195 [2024-06-07 23:29:50.789818] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.195 [2024-06-07 23:29:50.789873] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.195 [2024-06-07 23:29:50.789889] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.195 [2024-06-07 23:29:50.789895] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.195 [2024-06-07 23:29:50.789901] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.195 [2024-06-07 23:29:50.789914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.195 qpair failed and we were unable to recover it. 00:33:28.195 [2024-06-07 23:29:50.799879] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.195 [2024-06-07 23:29:50.799939] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.195 [2024-06-07 23:29:50.799954] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.195 [2024-06-07 23:29:50.799961] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.195 [2024-06-07 23:29:50.799967] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.195 [2024-06-07 23:29:50.799980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.195 qpair failed and we were unable to recover it. 00:33:28.195 [2024-06-07 23:29:50.809896] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.195 [2024-06-07 23:29:50.809958] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.195 [2024-06-07 23:29:50.809973] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.195 [2024-06-07 23:29:50.809979] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.195 [2024-06-07 23:29:50.809985] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.195 [2024-06-07 23:29:50.809998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.195 qpair failed and we were unable to recover it. 
00:33:28.195 [2024-06-07 23:29:50.819789] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.195 [2024-06-07 23:29:50.819853] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.195 [2024-06-07 23:29:50.819871] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.195 [2024-06-07 23:29:50.819877] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.195 [2024-06-07 23:29:50.819883] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.195 [2024-06-07 23:29:50.819897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.195 qpair failed and we were unable to recover it. 00:33:28.195 [2024-06-07 23:29:50.829907] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.195 [2024-06-07 23:29:50.829973] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.195 [2024-06-07 23:29:50.829997] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.195 [2024-06-07 23:29:50.830005] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.195 [2024-06-07 23:29:50.830011] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.195 [2024-06-07 23:29:50.830030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.195 qpair failed and we were unable to recover it. 00:33:28.195 [2024-06-07 23:29:50.839998] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.195 [2024-06-07 23:29:50.840065] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.195 [2024-06-07 23:29:50.840089] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.195 [2024-06-07 23:29:50.840097] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.195 [2024-06-07 23:29:50.840103] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.195 [2024-06-07 23:29:50.840122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.195 qpair failed and we were unable to recover it. 
00:33:28.195 [2024-06-07 23:29:50.849980] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.195 [2024-06-07 23:29:50.850056] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.195 [2024-06-07 23:29:50.850080] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.195 [2024-06-07 23:29:50.850088] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.195 [2024-06-07 23:29:50.850094] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.195 [2024-06-07 23:29:50.850112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.195 qpair failed and we were unable to recover it. 00:33:28.195 [2024-06-07 23:29:50.860009] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.195 [2024-06-07 23:29:50.860104] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.195 [2024-06-07 23:29:50.860120] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.195 [2024-06-07 23:29:50.860127] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.195 [2024-06-07 23:29:50.860133] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.195 [2024-06-07 23:29:50.860147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.195 qpair failed and we were unable to recover it. 00:33:28.195 [2024-06-07 23:29:50.870041] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.195 [2024-06-07 23:29:50.870107] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.195 [2024-06-07 23:29:50.870123] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.195 [2024-06-07 23:29:50.870130] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.195 [2024-06-07 23:29:50.870136] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.195 [2024-06-07 23:29:50.870149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.195 qpair failed and we were unable to recover it. 
00:33:28.458 [2024-06-07 23:29:50.880036] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.458 [2024-06-07 23:29:50.880093] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.458 [2024-06-07 23:29:50.880108] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.458 [2024-06-07 23:29:50.880115] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.458 [2024-06-07 23:29:50.880121] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.458 [2024-06-07 23:29:50.880135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.458 qpair failed and we were unable to recover it. 00:33:28.458 [2024-06-07 23:29:50.890126] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.458 [2024-06-07 23:29:50.890187] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.458 [2024-06-07 23:29:50.890203] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.458 [2024-06-07 23:29:50.890210] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.458 [2024-06-07 23:29:50.890216] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.458 [2024-06-07 23:29:50.890229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.458 qpair failed and we were unable to recover it. 00:33:28.458 [2024-06-07 23:29:50.900207] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.458 [2024-06-07 23:29:50.900270] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.458 [2024-06-07 23:29:50.900285] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.458 [2024-06-07 23:29:50.900292] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.458 [2024-06-07 23:29:50.900298] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.458 [2024-06-07 23:29:50.900311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.458 qpair failed and we were unable to recover it. 
00:33:28.458 [2024-06-07 23:29:50.910159] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.458 [2024-06-07 23:29:50.910256] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.458 [2024-06-07 23:29:50.910275] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.458 [2024-06-07 23:29:50.910282] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.458 [2024-06-07 23:29:50.910288] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.458 [2024-06-07 23:29:50.910301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.458 qpair failed and we were unable to recover it. 00:33:28.458 [2024-06-07 23:29:50.920165] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.458 [2024-06-07 23:29:50.920235] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.458 [2024-06-07 23:29:50.920254] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.458 [2024-06-07 23:29:50.920261] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.458 [2024-06-07 23:29:50.920267] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.458 [2024-06-07 23:29:50.920281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.458 qpair failed and we were unable to recover it. 00:33:28.458 [2024-06-07 23:29:50.930174] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.458 [2024-06-07 23:29:50.930286] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.458 [2024-06-07 23:29:50.930302] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.458 [2024-06-07 23:29:50.930309] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.458 [2024-06-07 23:29:50.930315] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.458 [2024-06-07 23:29:50.930328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.458 qpair failed and we were unable to recover it. 
00:33:28.458 [2024-06-07 23:29:50.940227] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.458 [2024-06-07 23:29:50.940289] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.458 [2024-06-07 23:29:50.940304] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.458 [2024-06-07 23:29:50.940311] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.458 [2024-06-07 23:29:50.940316] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.458 [2024-06-07 23:29:50.940330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.458 qpair failed and we were unable to recover it. 00:33:28.458 [2024-06-07 23:29:50.950292] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.458 [2024-06-07 23:29:50.950348] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.458 [2024-06-07 23:29:50.950362] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.458 [2024-06-07 23:29:50.950369] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.458 [2024-06-07 23:29:50.950375] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.458 [2024-06-07 23:29:50.950392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.458 qpair failed and we were unable to recover it. 00:33:28.458 [2024-06-07 23:29:50.960290] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.458 [2024-06-07 23:29:50.960348] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.458 [2024-06-07 23:29:50.960364] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.458 [2024-06-07 23:29:50.960371] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.458 [2024-06-07 23:29:50.960377] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.458 [2024-06-07 23:29:50.960391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.458 qpair failed and we were unable to recover it. 
00:33:28.458 [2024-06-07 23:29:50.970358] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.458 [2024-06-07 23:29:50.970417] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.458 [2024-06-07 23:29:50.970432] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.458 [2024-06-07 23:29:50.970439] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.458 [2024-06-07 23:29:50.970444] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.458 [2024-06-07 23:29:50.970458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.458 qpair failed and we were unable to recover it. 00:33:28.459 [2024-06-07 23:29:50.980381] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.459 [2024-06-07 23:29:50.980485] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.459 [2024-06-07 23:29:50.980499] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.459 [2024-06-07 23:29:50.980506] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.459 [2024-06-07 23:29:50.980512] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.459 [2024-06-07 23:29:50.980525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.459 qpair failed and we were unable to recover it. 00:33:28.459 [2024-06-07 23:29:50.990380] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.459 [2024-06-07 23:29:50.990441] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.459 [2024-06-07 23:29:50.990456] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.459 [2024-06-07 23:29:50.990463] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.459 [2024-06-07 23:29:50.990469] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.459 [2024-06-07 23:29:50.990482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.459 qpair failed and we were unable to recover it. 
00:33:28.459 [2024-06-07 23:29:51.000421] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.459 [2024-06-07 23:29:51.000483] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.459 [2024-06-07 23:29:51.000502] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.459 [2024-06-07 23:29:51.000509] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.459 [2024-06-07 23:29:51.000515] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.459 [2024-06-07 23:29:51.000528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.459 qpair failed and we were unable to recover it. 00:33:28.459 [2024-06-07 23:29:51.010432] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.459 [2024-06-07 23:29:51.010529] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.459 [2024-06-07 23:29:51.010544] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.459 [2024-06-07 23:29:51.010551] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.459 [2024-06-07 23:29:51.010557] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.459 [2024-06-07 23:29:51.010570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.459 qpair failed and we were unable to recover it. 00:33:28.459 [2024-06-07 23:29:51.020467] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.459 [2024-06-07 23:29:51.020524] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.459 [2024-06-07 23:29:51.020539] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.459 [2024-06-07 23:29:51.020545] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.459 [2024-06-07 23:29:51.020551] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.459 [2024-06-07 23:29:51.020565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.459 qpair failed and we were unable to recover it. 
00:33:28.459 [2024-06-07 23:29:51.030392] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.459 [2024-06-07 23:29:51.030453] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.459 [2024-06-07 23:29:51.030468] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.459 [2024-06-07 23:29:51.030474] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.459 [2024-06-07 23:29:51.030480] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.459 [2024-06-07 23:29:51.030493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.459 qpair failed and we were unable to recover it. 00:33:28.459 [2024-06-07 23:29:51.040540] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.459 [2024-06-07 23:29:51.040597] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.459 [2024-06-07 23:29:51.040612] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.459 [2024-06-07 23:29:51.040618] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.459 [2024-06-07 23:29:51.040624] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.459 [2024-06-07 23:29:51.040641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.459 qpair failed and we were unable to recover it. 00:33:28.459 [2024-06-07 23:29:51.050578] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.459 [2024-06-07 23:29:51.050683] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.459 [2024-06-07 23:29:51.050698] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.459 [2024-06-07 23:29:51.050705] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.459 [2024-06-07 23:29:51.050710] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.459 [2024-06-07 23:29:51.050723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.459 qpair failed and we were unable to recover it. 
00:33:28.459 [2024-06-07 23:29:51.060567] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.459 [2024-06-07 23:29:51.060627] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.459 [2024-06-07 23:29:51.060643] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.459 [2024-06-07 23:29:51.060650] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.459 [2024-06-07 23:29:51.060656] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.459 [2024-06-07 23:29:51.060670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.459 qpair failed and we were unable to recover it. 00:33:28.459 [2024-06-07 23:29:51.070488] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.459 [2024-06-07 23:29:51.070545] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.459 [2024-06-07 23:29:51.070560] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.459 [2024-06-07 23:29:51.070566] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.459 [2024-06-07 23:29:51.070572] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.459 [2024-06-07 23:29:51.070585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.459 qpair failed and we were unable to recover it. 00:33:28.459 [2024-06-07 23:29:51.080636] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.459 [2024-06-07 23:29:51.080692] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.459 [2024-06-07 23:29:51.080706] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.459 [2024-06-07 23:29:51.080713] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.459 [2024-06-07 23:29:51.080719] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.459 [2024-06-07 23:29:51.080732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.459 qpair failed and we were unable to recover it. 
00:33:28.459 [2024-06-07 23:29:51.090542] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.459 [2024-06-07 23:29:51.090606] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.459 [2024-06-07 23:29:51.090626] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.459 [2024-06-07 23:29:51.090633] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.459 [2024-06-07 23:29:51.090639] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.459 [2024-06-07 23:29:51.090653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.459 qpair failed and we were unable to recover it. 00:33:28.459 [2024-06-07 23:29:51.100595] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.459 [2024-06-07 23:29:51.100652] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.459 [2024-06-07 23:29:51.100667] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.459 [2024-06-07 23:29:51.100674] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.459 [2024-06-07 23:29:51.100680] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.460 [2024-06-07 23:29:51.100693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.460 qpair failed and we were unable to recover it. 00:33:28.460 [2024-06-07 23:29:51.110720] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.460 [2024-06-07 23:29:51.110776] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.460 [2024-06-07 23:29:51.110791] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.460 [2024-06-07 23:29:51.110798] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.460 [2024-06-07 23:29:51.110804] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.460 [2024-06-07 23:29:51.110817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.460 qpair failed and we were unable to recover it. 
00:33:28.460 [2024-06-07 23:29:51.120737] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.460 [2024-06-07 23:29:51.120794] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.460 [2024-06-07 23:29:51.120808] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.460 [2024-06-07 23:29:51.120815] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.460 [2024-06-07 23:29:51.120821] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.460 [2024-06-07 23:29:51.120834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.460 qpair failed and we were unable to recover it. 00:33:28.460 [2024-06-07 23:29:51.130755] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.460 [2024-06-07 23:29:51.130857] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.460 [2024-06-07 23:29:51.130872] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.460 [2024-06-07 23:29:51.130879] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.460 [2024-06-07 23:29:51.130885] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.460 [2024-06-07 23:29:51.130901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.460 qpair failed and we were unable to recover it. 00:33:28.722 [2024-06-07 23:29:51.140776] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.722 [2024-06-07 23:29:51.140832] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.722 [2024-06-07 23:29:51.140847] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.722 [2024-06-07 23:29:51.140854] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.722 [2024-06-07 23:29:51.140860] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.722 [2024-06-07 23:29:51.140873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.722 qpair failed and we were unable to recover it. 
00:33:28.722 [2024-06-07 23:29:51.150813] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.722 [2024-06-07 23:29:51.150871] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.722 [2024-06-07 23:29:51.150886] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.722 [2024-06-07 23:29:51.150893] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.722 [2024-06-07 23:29:51.150899] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.722 [2024-06-07 23:29:51.150912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.722 qpair failed and we were unable to recover it. 00:33:28.722 [2024-06-07 23:29:51.160831] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.723 [2024-06-07 23:29:51.160890] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.723 [2024-06-07 23:29:51.160906] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.723 [2024-06-07 23:29:51.160912] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.723 [2024-06-07 23:29:51.160918] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.723 [2024-06-07 23:29:51.160932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.723 qpair failed and we were unable to recover it. 00:33:28.723 [2024-06-07 23:29:51.170878] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.723 [2024-06-07 23:29:51.170946] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.723 [2024-06-07 23:29:51.170971] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.723 [2024-06-07 23:29:51.170979] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.723 [2024-06-07 23:29:51.170985] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.723 [2024-06-07 23:29:51.171003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.723 qpair failed and we were unable to recover it. 
00:33:28.723 [2024-06-07 23:29:51.180777] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.723 [2024-06-07 23:29:51.180849] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.723 [2024-06-07 23:29:51.180869] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.723 [2024-06-07 23:29:51.180876] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.723 [2024-06-07 23:29:51.180883] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.723 [2024-06-07 23:29:51.180897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.723 qpair failed and we were unable to recover it. 00:33:28.723 [2024-06-07 23:29:51.190924] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.723 [2024-06-07 23:29:51.190987] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.723 [2024-06-07 23:29:51.191004] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.723 [2024-06-07 23:29:51.191010] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.723 [2024-06-07 23:29:51.191016] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.723 [2024-06-07 23:29:51.191030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.723 qpair failed and we were unable to recover it. 00:33:28.723 [2024-06-07 23:29:51.200985] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.723 [2024-06-07 23:29:51.201055] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.723 [2024-06-07 23:29:51.201079] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.723 [2024-06-07 23:29:51.201087] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.723 [2024-06-07 23:29:51.201093] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.723 [2024-06-07 23:29:51.201111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.723 qpair failed and we were unable to recover it. 
00:33:28.723 [2024-06-07 23:29:51.211002] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.723 [2024-06-07 23:29:51.211091] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.723 [2024-06-07 23:29:51.211108] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.723 [2024-06-07 23:29:51.211115] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.723 [2024-06-07 23:29:51.211121] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.723 [2024-06-07 23:29:51.211135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.723 qpair failed and we were unable to recover it. 00:33:28.723 [2024-06-07 23:29:51.221002] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.723 [2024-06-07 23:29:51.221096] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.723 [2024-06-07 23:29:51.221112] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.723 [2024-06-07 23:29:51.221118] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.723 [2024-06-07 23:29:51.221124] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.723 [2024-06-07 23:29:51.221142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.723 qpair failed and we were unable to recover it. 00:33:28.723 [2024-06-07 23:29:51.231000] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.723 [2024-06-07 23:29:51.231063] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.723 [2024-06-07 23:29:51.231078] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.723 [2024-06-07 23:29:51.231085] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.723 [2024-06-07 23:29:51.231091] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.723 [2024-06-07 23:29:51.231105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.723 qpair failed and we were unable to recover it. 
00:33:28.723 [2024-06-07 23:29:51.241062] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.723 [2024-06-07 23:29:51.241158] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.723 [2024-06-07 23:29:51.241173] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.723 [2024-06-07 23:29:51.241179] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.723 [2024-06-07 23:29:51.241186] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.723 [2024-06-07 23:29:51.241199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.723 qpair failed and we were unable to recover it. 00:33:28.723 [2024-06-07 23:29:51.250981] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.723 [2024-06-07 23:29:51.251043] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.723 [2024-06-07 23:29:51.251058] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.723 [2024-06-07 23:29:51.251064] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.723 [2024-06-07 23:29:51.251070] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.723 [2024-06-07 23:29:51.251084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.723 qpair failed and we were unable to recover it. 00:33:28.723 [2024-06-07 23:29:51.261098] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.723 [2024-06-07 23:29:51.261155] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.723 [2024-06-07 23:29:51.261171] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.723 [2024-06-07 23:29:51.261177] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.723 [2024-06-07 23:29:51.261183] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.723 [2024-06-07 23:29:51.261197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.723 qpair failed and we were unable to recover it. 
00:33:28.723 [2024-06-07 23:29:51.271135] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.723 [2024-06-07 23:29:51.271202] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.723 [2024-06-07 23:29:51.271221] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.723 [2024-06-07 23:29:51.271227] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.723 [2024-06-07 23:29:51.271233] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.723 [2024-06-07 23:29:51.271253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.723 qpair failed and we were unable to recover it. 00:33:28.723 [2024-06-07 23:29:51.281170] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.723 [2024-06-07 23:29:51.281229] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.723 [2024-06-07 23:29:51.281249] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.723 [2024-06-07 23:29:51.281256] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.723 [2024-06-07 23:29:51.281262] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.723 [2024-06-07 23:29:51.281275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.723 qpair failed and we were unable to recover it. 00:33:28.723 [2024-06-07 23:29:51.291188] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.724 [2024-06-07 23:29:51.291258] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.724 [2024-06-07 23:29:51.291273] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.724 [2024-06-07 23:29:51.291280] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.724 [2024-06-07 23:29:51.291286] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.724 [2024-06-07 23:29:51.291300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.724 qpair failed and we were unable to recover it. 
00:33:28.724 [2024-06-07 23:29:51.301216] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.724 [2024-06-07 23:29:51.301279] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.724 [2024-06-07 23:29:51.301295] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.724 [2024-06-07 23:29:51.301301] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.724 [2024-06-07 23:29:51.301307] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.724 [2024-06-07 23:29:51.301320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.724 qpair failed and we were unable to recover it. 00:33:28.724 [2024-06-07 23:29:51.311272] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.724 [2024-06-07 23:29:51.311332] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.724 [2024-06-07 23:29:51.311346] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.724 [2024-06-07 23:29:51.311353] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.724 [2024-06-07 23:29:51.311362] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.724 [2024-06-07 23:29:51.311376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.724 qpair failed and we were unable to recover it. 00:33:28.724 [2024-06-07 23:29:51.321285] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.724 [2024-06-07 23:29:51.321375] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.724 [2024-06-07 23:29:51.321390] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.724 [2024-06-07 23:29:51.321397] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.724 [2024-06-07 23:29:51.321403] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.724 [2024-06-07 23:29:51.321416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.724 qpair failed and we were unable to recover it. 
00:33:28.724 [2024-06-07 23:29:51.331352] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.724 [2024-06-07 23:29:51.331416] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.724 [2024-06-07 23:29:51.331431] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.724 [2024-06-07 23:29:51.331438] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.724 [2024-06-07 23:29:51.331444] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.724 [2024-06-07 23:29:51.331457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.724 qpair failed and we were unable to recover it. 00:33:28.724 [2024-06-07 23:29:51.341385] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.724 [2024-06-07 23:29:51.341446] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.724 [2024-06-07 23:29:51.341461] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.724 [2024-06-07 23:29:51.341467] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.724 [2024-06-07 23:29:51.341473] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.724 [2024-06-07 23:29:51.341487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.724 qpair failed and we were unable to recover it. 00:33:28.724 [2024-06-07 23:29:51.351405] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.724 [2024-06-07 23:29:51.351461] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.724 [2024-06-07 23:29:51.351476] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.724 [2024-06-07 23:29:51.351482] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.724 [2024-06-07 23:29:51.351489] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.724 [2024-06-07 23:29:51.351504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.724 qpair failed and we were unable to recover it. 
00:33:28.724 [2024-06-07 23:29:51.361307] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.724 [2024-06-07 23:29:51.361367] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.724 [2024-06-07 23:29:51.361382] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.724 [2024-06-07 23:29:51.361388] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.724 [2024-06-07 23:29:51.361394] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.724 [2024-06-07 23:29:51.361408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.724 qpair failed and we were unable to recover it. 00:33:28.724 [2024-06-07 23:29:51.371303] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.724 [2024-06-07 23:29:51.371363] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.724 [2024-06-07 23:29:51.371379] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.724 [2024-06-07 23:29:51.371385] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.724 [2024-06-07 23:29:51.371391] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.724 [2024-06-07 23:29:51.371405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.724 qpair failed and we were unable to recover it. 00:33:28.724 [2024-06-07 23:29:51.381484] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.724 [2024-06-07 23:29:51.381539] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.724 [2024-06-07 23:29:51.381554] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.724 [2024-06-07 23:29:51.381560] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.724 [2024-06-07 23:29:51.381566] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.724 [2024-06-07 23:29:51.381580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.724 qpair failed and we were unable to recover it. 
00:33:28.724 [2024-06-07 23:29:51.391498] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.724 [2024-06-07 23:29:51.391573] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.724 [2024-06-07 23:29:51.391589] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.724 [2024-06-07 23:29:51.391595] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.724 [2024-06-07 23:29:51.391601] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.724 [2024-06-07 23:29:51.391614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.724 qpair failed and we were unable to recover it. 00:33:28.724 [2024-06-07 23:29:51.401493] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.724 [2024-06-07 23:29:51.401550] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.724 [2024-06-07 23:29:51.401565] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.724 [2024-06-07 23:29:51.401572] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.724 [2024-06-07 23:29:51.401581] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.724 [2024-06-07 23:29:51.401595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.724 qpair failed and we were unable to recover it. 00:33:28.987 [2024-06-07 23:29:51.411527] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.987 [2024-06-07 23:29:51.411584] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.987 [2024-06-07 23:29:51.411599] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.987 [2024-06-07 23:29:51.411605] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.987 [2024-06-07 23:29:51.411611] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.987 [2024-06-07 23:29:51.411624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.987 qpair failed and we were unable to recover it. 
00:33:28.987 [2024-06-07 23:29:51.421517] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.987 [2024-06-07 23:29:51.421576] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.987 [2024-06-07 23:29:51.421591] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.987 [2024-06-07 23:29:51.421597] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.987 [2024-06-07 23:29:51.421603] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.987 [2024-06-07 23:29:51.421616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.987 qpair failed and we were unable to recover it. 00:33:28.987 [2024-06-07 23:29:51.431577] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.987 [2024-06-07 23:29:51.431677] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.987 [2024-06-07 23:29:51.431691] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.987 [2024-06-07 23:29:51.431698] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.987 [2024-06-07 23:29:51.431704] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.987 [2024-06-07 23:29:51.431717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.987 qpair failed and we were unable to recover it. 00:33:28.987 [2024-06-07 23:29:51.441567] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.987 [2024-06-07 23:29:51.441627] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.987 [2024-06-07 23:29:51.441641] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.987 [2024-06-07 23:29:51.441648] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.987 [2024-06-07 23:29:51.441654] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.987 [2024-06-07 23:29:51.441667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.987 qpair failed and we were unable to recover it. 
00:33:28.987 [2024-06-07 23:29:51.451690] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.987 [2024-06-07 23:29:51.451754] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.987 [2024-06-07 23:29:51.451769] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.987 [2024-06-07 23:29:51.451776] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.987 [2024-06-07 23:29:51.451782] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.987 [2024-06-07 23:29:51.451796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.987 qpair failed and we were unable to recover it. 00:33:28.987 [2024-06-07 23:29:51.461674] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.987 [2024-06-07 23:29:51.461733] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.987 [2024-06-07 23:29:51.461747] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.987 [2024-06-07 23:29:51.461754] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.987 [2024-06-07 23:29:51.461760] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.987 [2024-06-07 23:29:51.461773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.987 qpair failed and we were unable to recover it. 00:33:28.987 [2024-06-07 23:29:51.471702] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.987 [2024-06-07 23:29:51.471763] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.987 [2024-06-07 23:29:51.471778] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.987 [2024-06-07 23:29:51.471785] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.987 [2024-06-07 23:29:51.471791] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.987 [2024-06-07 23:29:51.471804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.987 qpair failed and we were unable to recover it. 
00:33:28.987 [2024-06-07 23:29:51.481717] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.987 [2024-06-07 23:29:51.481775] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.987 [2024-06-07 23:29:51.481790] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.987 [2024-06-07 23:29:51.481796] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.987 [2024-06-07 23:29:51.481802] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.987 [2024-06-07 23:29:51.481816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.987 qpair failed and we were unable to recover it. 00:33:28.987 [2024-06-07 23:29:51.491758] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.987 [2024-06-07 23:29:51.491843] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.987 [2024-06-07 23:29:51.491858] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.987 [2024-06-07 23:29:51.491866] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.987 [2024-06-07 23:29:51.491879] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.987 [2024-06-07 23:29:51.491893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.987 qpair failed and we were unable to recover it. 00:33:28.987 [2024-06-07 23:29:51.501660] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.987 [2024-06-07 23:29:51.501725] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.988 [2024-06-07 23:29:51.501739] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.988 [2024-06-07 23:29:51.501746] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.988 [2024-06-07 23:29:51.501752] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.988 [2024-06-07 23:29:51.501765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.988 qpair failed and we were unable to recover it. 
00:33:28.988 [2024-06-07 23:29:51.511807] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.988 [2024-06-07 23:29:51.511892] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.988 [2024-06-07 23:29:51.511907] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.988 [2024-06-07 23:29:51.511914] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.988 [2024-06-07 23:29:51.511920] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.988 [2024-06-07 23:29:51.511933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.988 qpair failed and we were unable to recover it. 00:33:28.988 [2024-06-07 23:29:51.521832] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.988 [2024-06-07 23:29:51.521892] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.988 [2024-06-07 23:29:51.521907] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.988 [2024-06-07 23:29:51.521913] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.988 [2024-06-07 23:29:51.521919] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdb0 00:33:28.988 [2024-06-07 23:29:51.521933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.988 qpair failed and we were unable to recover it. 
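The entries above show the initiator retrying the NVMe-oF fabrics CONNECT for an I/O queue pair against 10.0.0.2:4420 (subsystem nqn.2016-06.io.spdk:cnode1): the target side rejects each attempt with "Unknown controller ID 0x1", the connect poll reports sct 1 / sc 130, and the queue pair is torn down with CQ transport error -6 before the loop starts over. As a rough manual check of the same listener from the initiator host (an illustrative sketch only, assuming nvme-cli is installed there; the test itself drives the connection through the SPDK host stack, not nvme-cli):

    # Query the discovery service and attempt a connect with the parameters seen in the log
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1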
00:33:28.988 [2024-06-07 23:29:51.522328] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203d8a0 is same with the state(5) to be set 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Write completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Write completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Write completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Write completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Write completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Write completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Write completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Write completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Write completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Write completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Write completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Write completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Write completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Write completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Write completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 [2024-06-07 23:29:51.522716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.988 [2024-06-07 23:29:51.531855] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.988 [2024-06-07 23:29:51.531912] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.988 [2024-06-07 23:29:51.531928] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect 
command completed with error: sct 1, sc 130 00:33:28.988 [2024-06-07 23:29:51.531935] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.988 [2024-06-07 23:29:51.531940] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f66bc000b90 00:33:28.988 [2024-06-07 23:29:51.531952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.988 qpair failed and we were unable to recover it. 00:33:28.988 [2024-06-07 23:29:51.541775] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.988 [2024-06-07 23:29:51.541830] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.988 [2024-06-07 23:29:51.541842] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.988 [2024-06-07 23:29:51.541847] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.988 [2024-06-07 23:29:51.541851] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f66bc000b90 00:33:28.988 [2024-06-07 23:29:51.541862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.988 qpair failed and we were unable to recover it. 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Write completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Write completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Write completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Write completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Write completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Write completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Write completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Write completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Write completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Write completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Write completed with error (sct=0, sc=8) 00:33:28.988 starting 
I/O failed 00:33:28.988 Write completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 Read completed with error (sct=0, sc=8) 00:33:28.988 starting I/O failed 00:33:28.988 [2024-06-07 23:29:51.542682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.988 [2024-06-07 23:29:51.551948] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.988 [2024-06-07 23:29:51.552089] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.988 [2024-06-07 23:29:51.552137] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.988 [2024-06-07 23:29:51.552159] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.988 [2024-06-07 23:29:51.552177] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f66b4000b90 00:33:28.988 [2024-06-07 23:29:51.552222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.988 qpair failed and we were unable to recover it. 00:33:28.989 [2024-06-07 23:29:51.561941] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.989 [2024-06-07 23:29:51.562045] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.989 [2024-06-07 23:29:51.562078] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.989 [2024-06-07 23:29:51.562094] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.989 [2024-06-07 23:29:51.562109] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f66b4000b90 00:33:28.989 [2024-06-07 23:29:51.562142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.989 qpair failed and we were unable to recover it. 
00:33:28.989 Read completed with error (sct=0, sc=8) 00:33:28.989 starting I/O failed 00:33:28.989 Read completed with error (sct=0, sc=8) 00:33:28.989 starting I/O failed 00:33:28.989 Read completed with error (sct=0, sc=8) 00:33:28.989 starting I/O failed 00:33:28.989 Read completed with error (sct=0, sc=8) 00:33:28.989 starting I/O failed 00:33:28.989 Read completed with error (sct=0, sc=8) 00:33:28.989 starting I/O failed 00:33:28.989 Read completed with error (sct=0, sc=8) 00:33:28.989 starting I/O failed 00:33:28.989 Read completed with error (sct=0, sc=8) 00:33:28.989 starting I/O failed 00:33:28.989 Read completed with error (sct=0, sc=8) 00:33:28.989 starting I/O failed 00:33:28.989 Read completed with error (sct=0, sc=8) 00:33:28.989 starting I/O failed 00:33:28.989 Read completed with error (sct=0, sc=8) 00:33:28.989 starting I/O failed 00:33:28.989 Read completed with error (sct=0, sc=8) 00:33:28.989 starting I/O failed 00:33:28.989 Read completed with error (sct=0, sc=8) 00:33:28.989 starting I/O failed 00:33:28.989 Write completed with error (sct=0, sc=8) 00:33:28.989 starting I/O failed 00:33:28.989 Write completed with error (sct=0, sc=8) 00:33:28.989 starting I/O failed 00:33:28.989 Read completed with error (sct=0, sc=8) 00:33:28.989 starting I/O failed 00:33:28.989 Write completed with error (sct=0, sc=8) 00:33:28.989 starting I/O failed 00:33:28.989 Write completed with error (sct=0, sc=8) 00:33:28.989 starting I/O failed 00:33:28.989 Write completed with error (sct=0, sc=8) 00:33:28.989 starting I/O failed 00:33:28.989 Write completed with error (sct=0, sc=8) 00:33:28.989 starting I/O failed 00:33:28.989 Write completed with error (sct=0, sc=8) 00:33:28.989 starting I/O failed 00:33:28.989 Write completed with error (sct=0, sc=8) 00:33:28.989 starting I/O failed 00:33:28.989 Write completed with error (sct=0, sc=8) 00:33:28.989 starting I/O failed 00:33:28.989 Write completed with error (sct=0, sc=8) 00:33:28.989 starting I/O failed 00:33:28.989 Write completed with error (sct=0, sc=8) 00:33:28.989 starting I/O failed 00:33:28.989 Read completed with error (sct=0, sc=8) 00:33:28.989 starting I/O failed 00:33:28.989 Read completed with error (sct=0, sc=8) 00:33:28.989 starting I/O failed 00:33:28.989 Write completed with error (sct=0, sc=8) 00:33:28.989 starting I/O failed 00:33:28.989 Write completed with error (sct=0, sc=8) 00:33:28.989 starting I/O failed 00:33:28.989 Read completed with error (sct=0, sc=8) 00:33:28.989 starting I/O failed 00:33:28.989 Write completed with error (sct=0, sc=8) 00:33:28.989 starting I/O failed 00:33:28.989 Write completed with error (sct=0, sc=8) 00:33:28.989 starting I/O failed 00:33:28.989 Write completed with error (sct=0, sc=8) 00:33:28.989 starting I/O failed 00:33:28.989 [2024-06-07 23:29:51.562957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:28.989 [2024-06-07 23:29:51.572172] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.989 [2024-06-07 23:29:51.572321] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.989 [2024-06-07 23:29:51.572384] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.989 [2024-06-07 23:29:51.572406] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF 
Fabric CONNECT command 00:33:28.989 [2024-06-07 23:29:51.572426] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f66c4000b90 00:33:28.989 [2024-06-07 23:29:51.572478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:28.989 qpair failed and we were unable to recover it. 00:33:28.989 [2024-06-07 23:29:51.582088] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:28.989 [2024-06-07 23:29:51.582199] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:28.989 [2024-06-07 23:29:51.582233] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:28.989 [2024-06-07 23:29:51.582257] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:28.989 [2024-06-07 23:29:51.582272] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f66c4000b90 00:33:28.989 [2024-06-07 23:29:51.582307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:28.989 qpair failed and we were unable to recover it. 00:33:28.989 [2024-06-07 23:29:51.582636] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203d8a0 (9): Bad file descriptor 00:33:28.989 Initializing NVMe Controllers 00:33:28.989 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:28.989 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:28.989 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:33:28.989 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:33:28.989 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:33:28.989 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:33:28.989 Initialization complete. Launching workers. 
00:33:28.989 Starting thread on core 1 00:33:28.989 Starting thread on core 2 00:33:28.989 Starting thread on core 3 00:33:28.989 Starting thread on core 0 00:33:28.989 23:29:51 -- host/target_disconnect.sh@59 -- # sync 00:33:28.989 00:33:28.989 real 0m11.283s 00:33:28.989 user 0m21.073s 00:33:28.989 sys 0m3.843s 00:33:28.989 23:29:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:28.989 23:29:51 -- common/autotest_common.sh@10 -- # set +x 00:33:28.989 ************************************ 00:33:28.989 END TEST nvmf_target_disconnect_tc2 00:33:28.989 ************************************ 00:33:28.989 23:29:51 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:33:28.989 23:29:51 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:33:28.989 23:29:51 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:33:28.989 23:29:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:28.989 23:29:51 -- nvmf/common.sh@116 -- # sync 00:33:28.989 23:29:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:28.989 23:29:51 -- nvmf/common.sh@119 -- # set +e 00:33:28.989 23:29:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:28.989 23:29:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:28.989 rmmod nvme_tcp 00:33:28.989 rmmod nvme_fabrics 00:33:29.251 rmmod nvme_keyring 00:33:29.251 23:29:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:29.251 23:29:51 -- nvmf/common.sh@123 -- # set -e 00:33:29.251 23:29:51 -- nvmf/common.sh@124 -- # return 0 00:33:29.251 23:29:51 -- nvmf/common.sh@477 -- # '[' -n 3058189 ']' 00:33:29.251 23:29:51 -- nvmf/common.sh@478 -- # killprocess 3058189 00:33:29.251 23:29:51 -- common/autotest_common.sh@926 -- # '[' -z 3058189 ']' 00:33:29.251 23:29:51 -- common/autotest_common.sh@930 -- # kill -0 3058189 00:33:29.251 23:29:51 -- common/autotest_common.sh@931 -- # uname 00:33:29.251 23:29:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:29.251 23:29:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3058189 00:33:29.251 23:29:51 -- common/autotest_common.sh@932 -- # process_name=reactor_4 00:33:29.251 23:29:51 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']' 00:33:29.251 23:29:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3058189' 00:33:29.251 killing process with pid 3058189 00:33:29.251 23:29:51 -- common/autotest_common.sh@945 -- # kill 3058189 00:33:29.251 23:29:51 -- common/autotest_common.sh@950 -- # wait 3058189 00:33:29.251 23:29:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:33:29.251 23:29:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:29.251 23:29:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:29.251 23:29:51 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:29.251 23:29:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:29.251 23:29:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:29.251 23:29:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:29.251 23:29:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:31.798 23:29:53 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:33:31.798 00:33:31.798 real 0m21.084s 00:33:31.798 user 0m48.621s 00:33:31.798 sys 0m9.438s 00:33:31.798 23:29:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:31.798 23:29:53 -- common/autotest_common.sh@10 -- # set +x 00:33:31.798 ************************************ 00:33:31.798 END TEST nvmf_target_disconnect 00:33:31.798 
************************************ 00:33:31.798 23:29:53 -- nvmf/nvmf.sh@126 -- # timing_exit host 00:33:31.798 23:29:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:31.798 23:29:53 -- common/autotest_common.sh@10 -- # set +x 00:33:31.798 23:29:53 -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:33:31.798 00:33:31.798 real 25m51.143s 00:33:31.798 user 68m58.815s 00:33:31.798 sys 7m6.384s 00:33:31.798 23:29:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:31.798 23:29:53 -- common/autotest_common.sh@10 -- # set +x 00:33:31.798 ************************************ 00:33:31.798 END TEST nvmf_tcp 00:33:31.798 ************************************ 00:33:31.798 23:29:54 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:33:31.798 23:29:54 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:31.798 23:29:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:31.798 23:29:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:31.798 23:29:54 -- common/autotest_common.sh@10 -- # set +x 00:33:31.798 ************************************ 00:33:31.798 START TEST spdkcli_nvmf_tcp 00:33:31.798 ************************************ 00:33:31.798 23:29:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:31.798 * Looking for test storage... 00:33:31.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:33:31.798 23:29:54 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:33:31.798 23:29:54 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:31.798 23:29:54 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:33:31.798 23:29:54 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:31.798 23:29:54 -- nvmf/common.sh@7 -- # uname -s 00:33:31.798 23:29:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:31.798 23:29:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:31.798 23:29:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:31.798 23:29:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:31.798 23:29:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:31.798 23:29:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:31.798 23:29:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:31.798 23:29:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:31.798 23:29:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:31.798 23:29:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:31.798 23:29:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:31.798 23:29:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:31.798 23:29:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:31.798 23:29:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:31.798 23:29:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:31.798 23:29:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:31.798 23:29:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh 
]] 00:33:31.798 23:29:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:31.798 23:29:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:31.798 23:29:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.798 23:29:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.798 23:29:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.798 23:29:54 -- paths/export.sh@5 -- # export PATH 00:33:31.798 23:29:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.798 23:29:54 -- nvmf/common.sh@46 -- # : 0 00:33:31.798 23:29:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:31.798 23:29:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:31.798 23:29:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:31.798 23:29:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:31.798 23:29:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:31.798 23:29:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:31.798 23:29:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:31.798 23:29:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:31.798 23:29:54 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:31.798 23:29:54 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:31.798 23:29:54 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:31.798 23:29:54 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:31.798 23:29:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:31.798 23:29:54 -- common/autotest_common.sh@10 -- # set +x 00:33:31.798 23:29:54 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:31.798 23:29:54 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3060021 00:33:31.798 23:29:54 -- spdkcli/common.sh@34 -- # waitforlisten 3060021 00:33:31.798 23:29:54 -- common/autotest_common.sh@819 -- # '[' -z 3060021 ']' 00:33:31.798 23:29:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:31.798 23:29:54 -- spdkcli/common.sh@32 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:31.798 23:29:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:31.798 23:29:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:31.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:31.798 23:29:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:31.798 23:29:54 -- common/autotest_common.sh@10 -- # set +x 00:33:31.799 [2024-06-07 23:29:54.224328] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:33:31.799 [2024-06-07 23:29:54.224399] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3060021 ] 00:33:31.799 EAL: No free 2048 kB hugepages reported on node 1 00:33:31.799 [2024-06-07 23:29:54.289194] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:31.799 [2024-06-07 23:29:54.326508] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:31.799 [2024-06-07 23:29:54.326789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:31.799 [2024-06-07 23:29:54.326791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:32.370 23:29:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:32.370 23:29:54 -- common/autotest_common.sh@852 -- # return 0 00:33:32.370 23:29:54 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:32.370 23:29:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:32.370 23:29:54 -- common/autotest_common.sh@10 -- # set +x 00:33:32.370 23:29:55 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:32.370 23:29:55 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:32.370 23:29:55 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:32.370 23:29:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:32.370 23:29:55 -- common/autotest_common.sh@10 -- # set +x 00:33:32.370 23:29:55 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:32.370 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:32.370 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:32.370 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:32.370 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:32.370 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:32.370 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:32.370 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:32.370 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:32.370 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:32.370 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:32.370 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:32.370 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:32.370 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:32.370 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:32.370 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:32.370 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:32.370 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:32.370 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:32.370 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:32.370 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:32.370 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:32.370 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:32.370 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:32.370 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:32.370 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:32.370 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:32.370 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:32.370 ' 00:33:32.939 [2024-06-07 23:29:55.347106] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:33:34.854 [2024-06-07 23:29:57.353019] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:36.240 [2024-06-07 23:29:58.516837] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:38.151 [2024-06-07 23:30:00.655654] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:40.061 [2024-06-07 23:30:02.489231] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:41.443 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:41.443 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:41.443 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:41.443 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:41.443 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:41.443 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:41.443 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:41.443 Executing command: ['/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:41.443 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:41.443 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:41.443 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:41.443 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:41.443 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:41.443 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:41.443 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:41.443 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:41.443 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:41.443 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:41.443 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:41.443 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:41.443 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:41.443 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:41.443 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:41.443 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:41.443 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:41.443 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:41.443 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:41.443 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:41.443 23:30:04 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:41.443 23:30:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:41.443 23:30:04 -- common/autotest_common.sh@10 -- # set +x 00:33:41.443 23:30:04 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:41.443 23:30:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:41.443 23:30:04 -- common/autotest_common.sh@10 -- # set +x 00:33:41.443 23:30:04 -- spdkcli/nvmf.sh@69 -- # check_match 00:33:41.443 23:30:04 -- spdkcli/common.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:42.015 23:30:04 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:42.015 23:30:04 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:42.015 23:30:04 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:42.015 23:30:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:42.015 23:30:04 -- common/autotest_common.sh@10 -- # set +x 00:33:42.015 23:30:04 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:42.015 23:30:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:42.015 23:30:04 -- common/autotest_common.sh@10 -- # set +x 00:33:42.015 23:30:04 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:42.015 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:42.015 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:42.015 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:42.015 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:42.015 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:42.015 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:42.015 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:42.015 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:42.015 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:42.015 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:42.015 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:42.015 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:42.015 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:42.015 ' 00:33:47.302 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:47.302 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:47.302 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:47.302 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:47.302 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:47.302 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:47.302 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:47.302 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:47.302 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:47.302 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 
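The check_match step recorded above dumps the live /nvmf tree with spdkcli.py and hands the corresponding template to the match tool before the delete batch runs. A minimal sketch of that verification, assuming the listing is captured into the .test file that the cleanup line removes (paths follow this workspace layout):

#!/usr/bin/env bash
# Hedged sketch of the check_match step seen above; adjust paths for your tree.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
MATCH_DIR=$SPDK/test/spdkcli/match_files

# Dump the live configuration tree under /nvmf ...
"$SPDK/scripts/spdkcli.py" ll /nvmf > "$MATCH_DIR/spdkcli_nvmf.test"

# ... let the match tool compare it against the recorded template ...
"$SPDK/test/app/match/match" "$MATCH_DIR/spdkcli_nvmf.test.match"

# ... and remove the captured snapshot afterwards, as the test does.
rm -f "$MATCH_DIR/spdkcli_nvmf.test"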
00:33:47.302 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:47.302 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:47.302 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:47.302 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:47.302 23:30:09 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:47.302 23:30:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:47.302 23:30:09 -- common/autotest_common.sh@10 -- # set +x 00:33:47.302 23:30:09 -- spdkcli/nvmf.sh@90 -- # killprocess 3060021 00:33:47.302 23:30:09 -- common/autotest_common.sh@926 -- # '[' -z 3060021 ']' 00:33:47.302 23:30:09 -- common/autotest_common.sh@930 -- # kill -0 3060021 00:33:47.302 23:30:09 -- common/autotest_common.sh@931 -- # uname 00:33:47.302 23:30:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:47.302 23:30:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3060021 00:33:47.302 23:30:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:47.302 23:30:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:47.302 23:30:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3060021' 00:33:47.302 killing process with pid 3060021 00:33:47.302 23:30:09 -- common/autotest_common.sh@945 -- # kill 3060021 00:33:47.302 [2024-06-07 23:30:09.854342] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:33:47.302 23:30:09 -- common/autotest_common.sh@950 -- # wait 3060021 00:33:47.302 23:30:09 -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:47.302 23:30:09 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:47.302 23:30:09 -- spdkcli/common.sh@13 -- # '[' -n 3060021 ']' 00:33:47.302 23:30:09 -- spdkcli/common.sh@14 -- # killprocess 3060021 00:33:47.302 23:30:09 -- common/autotest_common.sh@926 -- # '[' -z 3060021 ']' 00:33:47.302 23:30:09 -- common/autotest_common.sh@930 -- # kill -0 3060021 00:33:47.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3060021) - No such process 00:33:47.302 23:30:09 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3060021 is not found' 00:33:47.302 Process with pid 3060021 is not found 00:33:47.302 23:30:09 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:47.302 23:30:09 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:47.302 23:30:09 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:47.302 00:33:47.302 real 0m15.931s 00:33:47.302 user 0m33.358s 00:33:47.302 sys 0m0.730s 00:33:47.302 23:30:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:47.302 23:30:09 -- common/autotest_common.sh@10 -- # set +x 00:33:47.302 ************************************ 00:33:47.302 END TEST spdkcli_nvmf_tcp 00:33:47.302 ************************************ 00:33:47.564 23:30:10 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:47.564 23:30:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:47.564 23:30:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:47.564 23:30:10 -- 
common/autotest_common.sh@10 -- # set +x 00:33:47.564 ************************************ 00:33:47.564 START TEST nvmf_identify_passthru 00:33:47.564 ************************************ 00:33:47.564 23:30:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:47.564 * Looking for test storage... 00:33:47.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:47.564 23:30:10 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:47.564 23:30:10 -- nvmf/common.sh@7 -- # uname -s 00:33:47.564 23:30:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:47.564 23:30:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:47.564 23:30:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:47.564 23:30:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:47.564 23:30:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:47.564 23:30:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:47.564 23:30:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:47.564 23:30:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:47.564 23:30:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:47.564 23:30:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:47.564 23:30:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:47.564 23:30:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:47.564 23:30:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:47.564 23:30:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:47.564 23:30:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:47.564 23:30:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:47.564 23:30:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:47.564 23:30:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:47.564 23:30:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:47.564 23:30:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.564 23:30:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.564 23:30:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.564 23:30:10 -- paths/export.sh@5 -- # export PATH 00:33:47.565 23:30:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.565 23:30:10 -- nvmf/common.sh@46 -- # : 0 00:33:47.565 23:30:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:47.565 23:30:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:47.565 23:30:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:47.565 23:30:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:47.565 23:30:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:47.565 23:30:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:47.565 23:30:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:47.565 23:30:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:47.565 23:30:10 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:47.565 23:30:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:47.565 23:30:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:47.565 23:30:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:47.565 23:30:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.565 23:30:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.565 23:30:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.565 23:30:10 -- paths/export.sh@5 -- # export PATH 00:33:47.565 23:30:10 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.565 23:30:10 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:47.565 23:30:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:47.565 23:30:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:47.565 23:30:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:47.565 23:30:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:47.565 23:30:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:47.565 23:30:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:47.565 23:30:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:47.565 23:30:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:47.565 23:30:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:33:47.565 23:30:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:33:47.565 23:30:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:33:47.565 23:30:10 -- common/autotest_common.sh@10 -- # set +x 00:33:55.710 23:30:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:55.710 23:30:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:33:55.710 23:30:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:33:55.710 23:30:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:33:55.710 23:30:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:33:55.710 23:30:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:33:55.710 23:30:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:33:55.710 23:30:17 -- nvmf/common.sh@294 -- # net_devs=() 00:33:55.710 23:30:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:33:55.710 23:30:17 -- nvmf/common.sh@295 -- # e810=() 00:33:55.710 23:30:17 -- nvmf/common.sh@295 -- # local -ga e810 00:33:55.710 23:30:17 -- nvmf/common.sh@296 -- # x722=() 00:33:55.710 23:30:17 -- nvmf/common.sh@296 -- # local -ga x722 00:33:55.710 23:30:17 -- nvmf/common.sh@297 -- # mlx=() 00:33:55.710 23:30:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:33:55.710 23:30:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:55.710 23:30:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:55.710 23:30:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:55.710 23:30:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:55.710 23:30:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:55.710 23:30:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:55.710 23:30:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:55.710 23:30:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:55.710 23:30:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:55.710 23:30:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:55.710 23:30:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:55.710 23:30:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:33:55.710 23:30:17 -- 
nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:33:55.710 23:30:17 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:33:55.710 23:30:17 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:33:55.710 23:30:17 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:33:55.710 23:30:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:33:55.710 23:30:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:55.710 23:30:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:55.710 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:55.710 23:30:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:55.710 23:30:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:55.710 23:30:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:55.710 23:30:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:55.710 23:30:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:55.710 23:30:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:55.710 23:30:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:55.710 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:55.710 23:30:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:55.710 23:30:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:55.710 23:30:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:55.710 23:30:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:55.710 23:30:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:55.710 23:30:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:33:55.710 23:30:17 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:33:55.710 23:30:17 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:33:55.710 23:30:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:55.710 23:30:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:55.710 23:30:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:55.710 23:30:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:55.710 23:30:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:55.710 Found net devices under 0000:31:00.0: cvl_0_0 00:33:55.710 23:30:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:55.710 23:30:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:55.710 23:30:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:55.710 23:30:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:55.710 23:30:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:55.710 23:30:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:55.710 Found net devices under 0000:31:00.1: cvl_0_1 00:33:55.710 23:30:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:55.710 23:30:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:33:55.710 23:30:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:33:55.710 23:30:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:33:55.711 23:30:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:33:55.711 23:30:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:33:55.711 23:30:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:55.711 23:30:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:55.711 23:30:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:55.711 23:30:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:33:55.711 23:30:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:55.711 23:30:17 -- nvmf/common.sh@236 
-- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:55.711 23:30:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:33:55.711 23:30:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:55.711 23:30:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:55.711 23:30:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:33:55.711 23:30:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:33:55.711 23:30:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:33:55.711 23:30:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:55.711 23:30:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:55.711 23:30:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:55.711 23:30:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:33:55.711 23:30:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:55.711 23:30:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:55.711 23:30:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:55.711 23:30:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:33:55.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:55.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.723 ms 00:33:55.711 00:33:55.711 --- 10.0.0.2 ping statistics --- 00:33:55.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:55.711 rtt min/avg/max/mdev = 0.723/0.723/0.723/0.000 ms 00:33:55.711 23:30:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:55.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:55.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.384 ms 00:33:55.711 00:33:55.711 --- 10.0.0.1 ping statistics --- 00:33:55.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:55.711 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:33:55.711 23:30:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:55.711 23:30:17 -- nvmf/common.sh@410 -- # return 0 00:33:55.711 23:30:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:33:55.711 23:30:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:55.711 23:30:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:55.711 23:30:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:55.711 23:30:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:55.711 23:30:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:55.711 23:30:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:55.711 23:30:17 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:55.711 23:30:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:55.711 23:30:17 -- common/autotest_common.sh@10 -- # set +x 00:33:55.711 23:30:17 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:55.711 23:30:17 -- common/autotest_common.sh@1509 -- # bdfs=() 00:33:55.711 23:30:17 -- common/autotest_common.sh@1509 -- # local bdfs 00:33:55.711 23:30:17 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:33:55.711 23:30:17 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:33:55.711 23:30:17 -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:55.711 23:30:17 -- common/autotest_common.sh@1498 -- # local bdfs 00:33:55.711 23:30:17 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:33:55.711 23:30:17 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:55.711 23:30:17 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:55.711 23:30:17 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:33:55.711 23:30:17 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:33:55.711 23:30:17 -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:33:55.711 23:30:17 -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:33:55.711 23:30:17 -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:33:55.711 23:30:17 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:33:55.711 23:30:17 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:55.711 23:30:17 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:55.711 EAL: No free 2048 kB hugepages reported on node 1 00:33:55.711 23:30:17 -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605494 00:33:55.711 23:30:17 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:33:55.711 23:30:17 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:55.711 23:30:17 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:55.711 EAL: No free 2048 kB hugepages reported on node 1 00:33:55.971 23:30:18 -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:33:55.971 23:30:18 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:55.971 23:30:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:55.971 23:30:18 -- common/autotest_common.sh@10 -- # set +x 00:33:55.971 23:30:18 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:55.971 23:30:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:55.971 23:30:18 -- common/autotest_common.sh@10 -- # set +x 00:33:55.971 23:30:18 -- target/identify_passthru.sh@31 -- # nvmfpid=3067459 00:33:55.971 23:30:18 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:55.971 23:30:18 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:55.971 23:30:18 -- target/identify_passthru.sh@35 -- # waitforlisten 3067459 00:33:55.971 23:30:18 -- common/autotest_common.sh@819 -- # '[' -z 3067459 ']' 00:33:55.971 23:30:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:55.971 23:30:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:55.971 23:30:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:55.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:55.971 23:30:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:55.971 23:30:18 -- common/autotest_common.sh@10 -- # set +x 00:33:55.971 [2024-06-07 23:30:18.530605] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
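The identify_passthru test picks its target drive by asking gen_nvme.sh for the locally attached controllers and reading the Identify data of the first one over PCIe. A condensed sketch of that discovery, using the same helper scripts and the grep/awk extraction shown above (it assumes this SPDK checkout path and at least one local NVMe device):

#!/usr/bin/env bash
# Hedged sketch of the BDF/serial discovery recorded above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Enumerate local NVMe controllers and keep the first PCI address.
bdfs=($("$SPDK/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
bdf=${bdfs[0]}

# Read serial and model directly over PCIe; these values are later compared
# with what the NVMe-oF passthru subsystem reports.
serial=$("$SPDK/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
         | grep 'Serial Number:' | awk '{print $3}')
model=$("$SPDK/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
        | grep 'Model Number:' | awk '{print $3}')
echo "bdf=$bdf serial=$serial model=$model"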
00:33:55.971 [2024-06-07 23:30:18.530663] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:55.971 EAL: No free 2048 kB hugepages reported on node 1 00:33:55.972 [2024-06-07 23:30:18.600090] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:55.972 [2024-06-07 23:30:18.633072] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:55.972 [2024-06-07 23:30:18.633213] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:55.972 [2024-06-07 23:30:18.633223] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:55.972 [2024-06-07 23:30:18.633230] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:55.972 [2024-06-07 23:30:18.633308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:55.972 [2024-06-07 23:30:18.633435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:55.972 [2024-06-07 23:30:18.633598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:55.972 [2024-06-07 23:30:18.633598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:56.913 23:30:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:56.913 23:30:19 -- common/autotest_common.sh@852 -- # return 0 00:33:56.913 23:30:19 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:56.913 23:30:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:56.913 23:30:19 -- common/autotest_common.sh@10 -- # set +x 00:33:56.913 INFO: Log level set to 20 00:33:56.913 INFO: Requests: 00:33:56.913 { 00:33:56.913 "jsonrpc": "2.0", 00:33:56.913 "method": "nvmf_set_config", 00:33:56.913 "id": 1, 00:33:56.913 "params": { 00:33:56.913 "admin_cmd_passthru": { 00:33:56.913 "identify_ctrlr": true 00:33:56.913 } 00:33:56.913 } 00:33:56.913 } 00:33:56.913 00:33:56.913 INFO: response: 00:33:56.913 { 00:33:56.913 "jsonrpc": "2.0", 00:33:56.913 "id": 1, 00:33:56.913 "result": true 00:33:56.913 } 00:33:56.913 00:33:56.913 23:30:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:56.913 23:30:19 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:56.913 23:30:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:56.913 23:30:19 -- common/autotest_common.sh@10 -- # set +x 00:33:56.913 INFO: Setting log level to 20 00:33:56.913 INFO: Setting log level to 20 00:33:56.913 INFO: Log level set to 20 00:33:56.913 INFO: Log level set to 20 00:33:56.913 INFO: Requests: 00:33:56.913 { 00:33:56.913 "jsonrpc": "2.0", 00:33:56.913 "method": "framework_start_init", 00:33:56.913 "id": 1 00:33:56.913 } 00:33:56.913 00:33:56.913 INFO: Requests: 00:33:56.913 { 00:33:56.913 "jsonrpc": "2.0", 00:33:56.913 "method": "framework_start_init", 00:33:56.913 "id": 1 00:33:56.913 } 00:33:56.913 00:33:56.913 [2024-06-07 23:30:19.360969] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:56.913 INFO: response: 00:33:56.914 { 00:33:56.914 "jsonrpc": "2.0", 00:33:56.914 "id": 1, 00:33:56.914 "result": true 00:33:56.914 } 00:33:56.914 00:33:56.914 INFO: response: 00:33:56.914 { 00:33:56.914 "jsonrpc": "2.0", 00:33:56.914 "id": 1, 00:33:56.914 "result": true 00:33:56.914 } 00:33:56.914 00:33:56.914 23:30:19 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:56.914 23:30:19 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:56.914 23:30:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:56.914 23:30:19 -- common/autotest_common.sh@10 -- # set +x 00:33:56.914 INFO: Setting log level to 40 00:33:56.914 INFO: Setting log level to 40 00:33:56.914 INFO: Setting log level to 40 00:33:56.914 [2024-06-07 23:30:19.374231] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:56.914 23:30:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:56.914 23:30:19 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:56.914 23:30:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:56.914 23:30:19 -- common/autotest_common.sh@10 -- # set +x 00:33:56.914 23:30:19 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:33:56.914 23:30:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:56.914 23:30:19 -- common/autotest_common.sh@10 -- # set +x 00:33:57.173 Nvme0n1 00:33:57.173 23:30:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:57.173 23:30:19 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:57.173 23:30:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:57.173 23:30:19 -- common/autotest_common.sh@10 -- # set +x 00:33:57.173 23:30:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:57.173 23:30:19 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:57.173 23:30:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:57.173 23:30:19 -- common/autotest_common.sh@10 -- # set +x 00:33:57.173 23:30:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:57.173 23:30:19 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:57.173 23:30:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:57.173 23:30:19 -- common/autotest_common.sh@10 -- # set +x 00:33:57.173 [2024-06-07 23:30:19.756564] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:57.173 23:30:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:57.173 23:30:19 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:57.173 23:30:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:57.173 23:30:19 -- common/autotest_common.sh@10 -- # set +x 00:33:57.173 [2024-06-07 23:30:19.768336] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:33:57.173 [ 00:33:57.173 { 00:33:57.173 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:57.173 "subtype": "Discovery", 00:33:57.173 "listen_addresses": [], 00:33:57.173 "allow_any_host": true, 00:33:57.173 "hosts": [] 00:33:57.173 }, 00:33:57.173 { 00:33:57.173 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:57.173 "subtype": "NVMe", 00:33:57.173 "listen_addresses": [ 00:33:57.173 { 00:33:57.173 "transport": "TCP", 00:33:57.173 "trtype": "TCP", 00:33:57.173 "adrfam": "IPv4", 00:33:57.173 "traddr": "10.0.0.2", 00:33:57.173 "trsvcid": "4420" 00:33:57.173 } 00:33:57.173 ], 00:33:57.173 "allow_any_host": true, 00:33:57.173 "hosts": [], 00:33:57.173 "serial_number": "SPDK00000000000001", 
00:33:57.173 "model_number": "SPDK bdev Controller", 00:33:57.173 "max_namespaces": 1, 00:33:57.173 "min_cntlid": 1, 00:33:57.173 "max_cntlid": 65519, 00:33:57.173 "namespaces": [ 00:33:57.173 { 00:33:57.173 "nsid": 1, 00:33:57.173 "bdev_name": "Nvme0n1", 00:33:57.173 "name": "Nvme0n1", 00:33:57.173 "nguid": "36344730526054940025384500000027", 00:33:57.173 "uuid": "36344730-5260-5494-0025-384500000027" 00:33:57.173 } 00:33:57.173 ] 00:33:57.173 } 00:33:57.173 ] 00:33:57.173 23:30:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:57.173 23:30:19 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:57.173 23:30:19 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:57.173 23:30:19 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:57.173 EAL: No free 2048 kB hugepages reported on node 1 00:33:57.471 23:30:19 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:33:57.471 23:30:19 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:57.471 23:30:19 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:57.471 23:30:19 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:57.471 EAL: No free 2048 kB hugepages reported on node 1 00:33:57.471 23:30:20 -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:33:57.471 23:30:20 -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:33:57.471 23:30:20 -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:33:57.471 23:30:20 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:57.471 23:30:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:57.471 23:30:20 -- common/autotest_common.sh@10 -- # set +x 00:33:57.471 23:30:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:57.471 23:30:20 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:57.471 23:30:20 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:57.471 23:30:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:57.471 23:30:20 -- nvmf/common.sh@116 -- # sync 00:33:57.471 23:30:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:57.471 23:30:20 -- nvmf/common.sh@119 -- # set +e 00:33:57.471 23:30:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:57.471 23:30:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:57.471 rmmod nvme_tcp 00:33:57.471 rmmod nvme_fabrics 00:33:57.739 rmmod nvme_keyring 00:33:57.739 23:30:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:57.739 23:30:20 -- nvmf/common.sh@123 -- # set -e 00:33:57.739 23:30:20 -- nvmf/common.sh@124 -- # return 0 00:33:57.739 23:30:20 -- nvmf/common.sh@477 -- # '[' -n 3067459 ']' 00:33:57.739 23:30:20 -- nvmf/common.sh@478 -- # killprocess 3067459 00:33:57.739 23:30:20 -- common/autotest_common.sh@926 -- # '[' -z 3067459 ']' 00:33:57.739 23:30:20 -- common/autotest_common.sh@930 -- # kill -0 3067459 00:33:57.739 23:30:20 -- common/autotest_common.sh@931 -- # uname 00:33:57.739 23:30:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:57.739 23:30:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3067459 00:33:57.739 23:30:20 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:57.739 23:30:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:57.739 23:30:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3067459' 00:33:57.739 killing process with pid 3067459 00:33:57.739 23:30:20 -- common/autotest_common.sh@945 -- # kill 3067459 00:33:57.739 [2024-06-07 23:30:20.236063] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:33:57.739 23:30:20 -- common/autotest_common.sh@950 -- # wait 3067459 00:33:57.999 23:30:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:33:57.999 23:30:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:57.999 23:30:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:57.999 23:30:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:57.999 23:30:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:57.999 23:30:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:57.999 23:30:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:57.999 23:30:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:59.909 23:30:22 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:33:59.909 00:33:59.909 real 0m12.539s 00:33:59.909 user 0m9.693s 00:33:59.909 sys 0m6.093s 00:33:59.909 23:30:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:59.909 23:30:22 -- common/autotest_common.sh@10 -- # set +x 00:33:59.909 ************************************ 00:33:59.909 END TEST nvmf_identify_passthru 00:33:59.909 ************************************ 00:34:00.169 23:30:22 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:00.169 23:30:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:00.169 23:30:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:00.169 23:30:22 -- common/autotest_common.sh@10 -- # set +x 00:34:00.169 ************************************ 00:34:00.169 START TEST nvmf_dif 00:34:00.169 ************************************ 00:34:00.169 23:30:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:00.169 * Looking for test storage... 
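The passthru check that finished just above configures the target through rpc.py and then verifies that Identify data read over the fabric equals the PCIe-side values. A hedged, condensed sketch of that flow, reusing the addresses and NQN from this run and the $serial/$model values from the local identify sketch earlier:

#!/usr/bin/env bash
# Hedged sketch of the identify-passthru flow: target was started with
# --wait-for-rpc inside the cvl_0_0_ns_spdk namespace, so configure first.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"

# Enable passthru of Identify, finish framework init, create the TCP transport.
$RPC nvmf_set_config --passthru-identify-ctrlr
$RPC framework_start_init
$RPC nvmf_create_transport -t tcp -o -u 8192

# Attach the local controller and expose it through a single-namespace subsystem.
$RPC bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Identify through the fabric and compare with the PCIe-side values.
TRID=' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
nvmf_serial=$("$SPDK/build/bin/spdk_nvme_identify" -r "$TRID" | grep 'Serial Number:' | awk '{print $3}')
nvmf_model=$("$SPDK/build/bin/spdk_nvme_identify" -r "$TRID" | grep 'Model Number:' | awk '{print $3}')
[ "$nvmf_serial" = "$serial" ] && [ "$nvmf_model" = "$model" ] && echo "passthru identify OK"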
00:34:00.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:00.169 23:30:22 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:00.169 23:30:22 -- nvmf/common.sh@7 -- # uname -s 00:34:00.169 23:30:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:00.169 23:30:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:00.169 23:30:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:00.169 23:30:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:00.169 23:30:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:00.169 23:30:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:00.169 23:30:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:00.169 23:30:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:00.169 23:30:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:00.169 23:30:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:00.169 23:30:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:00.169 23:30:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:00.169 23:30:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:00.169 23:30:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:00.169 23:30:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:00.169 23:30:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:00.169 23:30:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:00.169 23:30:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:00.169 23:30:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:00.169 23:30:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.169 23:30:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.169 23:30:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.169 23:30:22 -- paths/export.sh@5 -- # export PATH 00:34:00.169 23:30:22 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.169 23:30:22 -- nvmf/common.sh@46 -- # : 0 00:34:00.169 23:30:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:34:00.169 23:30:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:34:00.170 23:30:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:34:00.170 23:30:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:00.170 23:30:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:00.170 23:30:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:34:00.170 23:30:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:34:00.170 23:30:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:34:00.170 23:30:22 -- target/dif.sh@15 -- # NULL_META=16 00:34:00.170 23:30:22 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:00.170 23:30:22 -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:00.170 23:30:22 -- target/dif.sh@15 -- # NULL_DIF=1 00:34:00.170 23:30:22 -- target/dif.sh@135 -- # nvmftestinit 00:34:00.170 23:30:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:34:00.170 23:30:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:00.170 23:30:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:34:00.170 23:30:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:34:00.170 23:30:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:34:00.170 23:30:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:00.170 23:30:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:00.170 23:30:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:00.170 23:30:22 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:34:00.170 23:30:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:34:00.170 23:30:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:34:00.170 23:30:22 -- common/autotest_common.sh@10 -- # set +x 00:34:08.304 23:30:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:34:08.304 23:30:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:34:08.304 23:30:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:34:08.304 23:30:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:34:08.304 23:30:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:34:08.304 23:30:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:34:08.304 23:30:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:34:08.304 23:30:29 -- nvmf/common.sh@294 -- # net_devs=() 00:34:08.304 23:30:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:34:08.305 23:30:29 -- nvmf/common.sh@295 -- # e810=() 00:34:08.305 23:30:29 -- nvmf/common.sh@295 -- # local -ga e810 00:34:08.305 23:30:29 -- nvmf/common.sh@296 -- # x722=() 00:34:08.305 23:30:29 -- nvmf/common.sh@296 -- # local -ga x722 00:34:08.305 23:30:29 -- nvmf/common.sh@297 -- # mlx=() 00:34:08.305 23:30:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:34:08.305 23:30:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:08.305 23:30:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:08.305 23:30:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:08.305 23:30:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:34:08.305 23:30:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:08.305 23:30:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:08.305 23:30:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:08.305 23:30:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:08.305 23:30:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:08.305 23:30:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:08.305 23:30:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:08.305 23:30:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:34:08.305 23:30:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:34:08.305 23:30:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:34:08.305 23:30:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:34:08.305 23:30:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:34:08.305 23:30:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:34:08.305 23:30:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:08.305 23:30:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:08.305 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:08.305 23:30:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:08.305 23:30:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:08.305 23:30:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:08.305 23:30:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:08.305 23:30:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:08.305 23:30:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:08.305 23:30:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:08.305 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:08.305 23:30:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:08.305 23:30:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:08.305 23:30:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:08.305 23:30:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:08.305 23:30:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:08.305 23:30:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:34:08.305 23:30:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:34:08.305 23:30:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:34:08.305 23:30:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:08.305 23:30:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:08.305 23:30:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:08.305 23:30:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:08.305 23:30:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:08.305 Found net devices under 0000:31:00.0: cvl_0_0 00:34:08.305 23:30:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:34:08.305 23:30:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:08.305 23:30:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:08.305 23:30:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:08.305 23:30:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:08.305 23:30:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:08.305 Found net devices under 0000:31:00.1: cvl_0_1 00:34:08.305 23:30:29 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:34:08.305 23:30:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:34:08.305 23:30:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:34:08.305 23:30:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:34:08.305 23:30:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:34:08.305 23:30:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:34:08.305 23:30:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:08.305 23:30:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:08.305 23:30:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:08.305 23:30:29 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:34:08.305 23:30:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:08.305 23:30:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:08.305 23:30:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:34:08.305 23:30:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:08.305 23:30:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:08.305 23:30:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:34:08.305 23:30:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:34:08.305 23:30:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:34:08.305 23:30:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:08.305 23:30:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:08.305 23:30:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:08.305 23:30:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:34:08.305 23:30:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:08.305 23:30:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:08.305 23:30:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:08.305 23:30:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:34:08.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:08.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.700 ms 00:34:08.305 00:34:08.305 --- 10.0.0.2 ping statistics --- 00:34:08.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:08.305 rtt min/avg/max/mdev = 0.700/0.700/0.700/0.000 ms 00:34:08.305 23:30:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:08.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:08.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:34:08.305 00:34:08.305 --- 10.0.0.1 ping statistics --- 00:34:08.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:08.305 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:34:08.305 23:30:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:08.305 23:30:29 -- nvmf/common.sh@410 -- # return 0 00:34:08.305 23:30:29 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:34:08.305 23:30:29 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:10.852 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:34:10.852 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:34:10.852 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:34:10.852 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:34:10.852 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:34:10.852 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:34:10.852 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:34:10.852 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:34:10.852 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:34:10.852 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:34:10.852 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:34:10.852 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:34:10.852 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:34:10.852 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:34:10.852 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:34:10.852 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:34:10.852 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:34:10.852 23:30:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:10.852 23:30:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:34:10.852 23:30:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:34:10.852 23:30:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:10.852 23:30:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:34:10.852 23:30:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:34:10.852 23:30:33 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:10.852 23:30:33 -- target/dif.sh@137 -- # nvmfappstart 00:34:10.852 23:30:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:34:10.852 23:30:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:10.852 23:30:33 -- common/autotest_common.sh@10 -- # set +x 00:34:10.852 23:30:33 -- nvmf/common.sh@469 -- # nvmfpid=3073621 00:34:10.852 23:30:33 -- nvmf/common.sh@470 -- # waitforlisten 3073621 00:34:10.852 23:30:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:10.852 23:30:33 -- common/autotest_common.sh@819 -- # '[' -z 3073621 ']' 00:34:10.852 23:30:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:10.852 23:30:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:10.852 23:30:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:10.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
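The nvmf_tcp_init steps traced above amount to a small, reproducible recipe: one port of the dual-port E810 NIC (cvl_0_0) is moved into a private network namespace and given the target address, the other port (cvl_0_1) stays in the root namespace as the initiator, and TCP port 4420 is opened before connectivity is verified with ping. A condensed sketch of that setup, assuming the same interface names and 10.0.0.0/24 addressing shown in the trace:

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                            # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address (root namespace)
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target address (inside the namespace)
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                         # initiator -> target sanity check

nvmf_tgt is then launched under ip netns exec "$NS" (the NVMF_TARGET_NS_CMD prefix above), so it listens on 10.0.0.2 while fio connects from the root namespace over 10.0.0.1.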
00:34:10.852 23:30:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:10.852 23:30:33 -- common/autotest_common.sh@10 -- # set +x 00:34:10.852 [2024-06-07 23:30:33.477296] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:34:10.852 [2024-06-07 23:30:33.477362] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:10.852 EAL: No free 2048 kB hugepages reported on node 1 00:34:11.111 [2024-06-07 23:30:33.550812] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:11.111 [2024-06-07 23:30:33.588262] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:34:11.111 [2024-06-07 23:30:33.588392] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:11.111 [2024-06-07 23:30:33.588402] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:11.111 [2024-06-07 23:30:33.588409] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:11.111 [2024-06-07 23:30:33.588428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:11.681 23:30:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:11.681 23:30:34 -- common/autotest_common.sh@852 -- # return 0 00:34:11.681 23:30:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:34:11.681 23:30:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:11.681 23:30:34 -- common/autotest_common.sh@10 -- # set +x 00:34:11.681 23:30:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:11.681 23:30:34 -- target/dif.sh@139 -- # create_transport 00:34:11.681 23:30:34 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:11.681 23:30:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:11.681 23:30:34 -- common/autotest_common.sh@10 -- # set +x 00:34:11.681 [2024-06-07 23:30:34.294535] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:11.681 23:30:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:11.681 23:30:34 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:11.681 23:30:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:11.681 23:30:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:11.681 23:30:34 -- common/autotest_common.sh@10 -- # set +x 00:34:11.681 ************************************ 00:34:11.681 START TEST fio_dif_1_default 00:34:11.681 ************************************ 00:34:11.681 23:30:34 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:34:11.681 23:30:34 -- target/dif.sh@86 -- # create_subsystems 0 00:34:11.681 23:30:34 -- target/dif.sh@28 -- # local sub 00:34:11.681 23:30:34 -- target/dif.sh@30 -- # for sub in "$@" 00:34:11.681 23:30:34 -- target/dif.sh@31 -- # create_subsystem 0 00:34:11.681 23:30:34 -- target/dif.sh@18 -- # local sub_id=0 00:34:11.681 23:30:34 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:11.681 23:30:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:11.681 23:30:34 -- common/autotest_common.sh@10 -- # set +x 00:34:11.681 bdev_null0 00:34:11.681 23:30:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:11.681 23:30:34 -- target/dif.sh@22 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:11.681 23:30:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:11.681 23:30:34 -- common/autotest_common.sh@10 -- # set +x 00:34:11.681 23:30:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:11.681 23:30:34 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:11.681 23:30:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:11.681 23:30:34 -- common/autotest_common.sh@10 -- # set +x 00:34:11.681 23:30:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:11.681 23:30:34 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:11.681 23:30:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:11.681 23:30:34 -- common/autotest_common.sh@10 -- # set +x 00:34:11.682 [2024-06-07 23:30:34.338783] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:11.682 23:30:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:11.682 23:30:34 -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:11.682 23:30:34 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:11.682 23:30:34 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:11.682 23:30:34 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:11.682 23:30:34 -- nvmf/common.sh@520 -- # config=() 00:34:11.682 23:30:34 -- nvmf/common.sh@520 -- # local subsystem config 00:34:11.682 23:30:34 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:11.682 23:30:34 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:11.682 23:30:34 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:11.682 { 00:34:11.682 "params": { 00:34:11.682 "name": "Nvme$subsystem", 00:34:11.682 "trtype": "$TEST_TRANSPORT", 00:34:11.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:11.682 "adrfam": "ipv4", 00:34:11.682 "trsvcid": "$NVMF_PORT", 00:34:11.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:11.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:11.682 "hdgst": ${hdgst:-false}, 00:34:11.682 "ddgst": ${ddgst:-false} 00:34:11.682 }, 00:34:11.682 "method": "bdev_nvme_attach_controller" 00:34:11.682 } 00:34:11.682 EOF 00:34:11.682 )") 00:34:11.682 23:30:34 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:11.682 23:30:34 -- target/dif.sh@82 -- # gen_fio_conf 00:34:11.682 23:30:34 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:11.682 23:30:34 -- target/dif.sh@54 -- # local file 00:34:11.682 23:30:34 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:11.682 23:30:34 -- target/dif.sh@56 -- # cat 00:34:11.682 23:30:34 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:11.682 23:30:34 -- common/autotest_common.sh@1320 -- # shift 00:34:11.682 23:30:34 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:11.682 23:30:34 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:11.682 23:30:34 -- nvmf/common.sh@542 -- # cat 00:34:11.682 23:30:34 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:11.682 23:30:34 -- target/dif.sh@72 -- # (( file 
= 1 )) 00:34:11.682 23:30:34 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:11.682 23:30:34 -- target/dif.sh@72 -- # (( file <= files )) 00:34:11.682 23:30:34 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:11.682 23:30:34 -- nvmf/common.sh@544 -- # jq . 00:34:11.682 23:30:34 -- nvmf/common.sh@545 -- # IFS=, 00:34:11.682 23:30:34 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:11.682 "params": { 00:34:11.682 "name": "Nvme0", 00:34:11.682 "trtype": "tcp", 00:34:11.682 "traddr": "10.0.0.2", 00:34:11.682 "adrfam": "ipv4", 00:34:11.682 "trsvcid": "4420", 00:34:11.682 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:11.682 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:11.682 "hdgst": false, 00:34:11.682 "ddgst": false 00:34:11.682 }, 00:34:11.682 "method": "bdev_nvme_attach_controller" 00:34:11.682 }' 00:34:11.958 23:30:34 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:11.958 23:30:34 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:11.958 23:30:34 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:11.958 23:30:34 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:11.958 23:30:34 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:11.958 23:30:34 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:11.958 23:30:34 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:11.958 23:30:34 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:11.958 23:30:34 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:11.958 23:30:34 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:12.221 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:12.221 fio-3.35 00:34:12.221 Starting 1 thread 00:34:12.221 EAL: No free 2048 kB hugepages reported on node 1 00:34:12.791 [2024-06-07 23:30:35.226307] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
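Each rpc_cmd call traced above is a thin wrapper around scripts/rpc.py talking to the nvmf_tgt process started earlier. The target-side setup for this first DIF test case reduces to the following sequence; the rpc.py path and the default /var/tmp/spdk.sock socket are assumptions (the trace does not print them), while the commands and arguments are taken verbatim from the trace:

  RPC="/path/to/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"   # path and socket are illustrative defaults
  # TCP transport with DIF insert/strip enabled (target/dif.sh@136 / create_transport above)
  $RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip
  # 64 MiB null bdev, 512-byte blocks with 16 bytes of metadata, DIF type 1
  $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  # subsystem cnode0, open to any host, exposing the null bdev on 10.0.0.2:4420
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420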
00:34:12.791 [2024-06-07 23:30:35.226349] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:22.880 00:34:22.880 filename0: (groupid=0, jobs=1): err= 0: pid=3074170: Fri Jun 7 23:30:45 2024 00:34:22.881 read: IOPS=186, BW=744KiB/s (762kB/s)(7440KiB/10000msec) 00:34:22.881 slat (nsec): min=5343, max=63397, avg=6150.44, stdev=1912.89 00:34:22.881 clat (usec): min=669, max=44029, avg=21488.70, stdev=20350.91 00:34:22.881 lat (usec): min=674, max=44065, avg=21494.85, stdev=20350.89 00:34:22.881 clat percentiles (usec): 00:34:22.881 | 1.00th=[ 791], 5.00th=[ 914], 10.00th=[ 1029], 20.00th=[ 1090], 00:34:22.881 | 30.00th=[ 1172], 40.00th=[ 1205], 50.00th=[41157], 60.00th=[41681], 00:34:22.881 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:34:22.881 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:34:22.881 | 99.99th=[43779] 00:34:22.881 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=744.42, stdev=29.87, samples=19 00:34:22.881 iops : min= 176, max= 192, avg=186.11, stdev= 7.47, samples=19 00:34:22.881 lat (usec) : 750=0.43%, 1000=8.76% 00:34:22.881 lat (msec) : 2=40.70%, 50=50.11% 00:34:22.881 cpu : usr=95.59%, sys=4.18%, ctx=22, majf=0, minf=280 00:34:22.881 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:22.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.881 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.881 issued rwts: total=1860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.881 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:22.881 00:34:22.881 Run status group 0 (all jobs): 00:34:22.881 READ: bw=744KiB/s (762kB/s), 744KiB/s-744KiB/s (762kB/s-762kB/s), io=7440KiB (7619kB), run=10000-10000msec 00:34:22.881 23:30:45 -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:22.881 23:30:45 -- target/dif.sh@43 -- # local sub 00:34:22.881 23:30:45 -- target/dif.sh@45 -- # for sub in "$@" 00:34:22.881 23:30:45 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:22.881 23:30:45 -- target/dif.sh@36 -- # local sub_id=0 00:34:22.881 23:30:45 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:22.881 23:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:22.881 23:30:45 -- common/autotest_common.sh@10 -- # set +x 00:34:22.881 23:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:22.881 23:30:45 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:22.881 23:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:22.881 23:30:45 -- common/autotest_common.sh@10 -- # set +x 00:34:22.881 23:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:22.881 00:34:22.881 real 0m11.184s 00:34:22.881 user 0m25.503s 00:34:22.881 sys 0m0.738s 00:34:22.881 23:30:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:22.881 23:30:45 -- common/autotest_common.sh@10 -- # set +x 00:34:22.881 ************************************ 00:34:22.881 END TEST fio_dif_1_default 00:34:22.881 ************************************ 00:34:22.881 23:30:45 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:22.881 23:30:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:22.881 23:30:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:22.881 23:30:45 -- common/autotest_common.sh@10 -- # set +x 00:34:22.881 ************************************ 00:34:22.881 START TEST 
fio_dif_1_multi_subsystems 00:34:22.881 ************************************ 00:34:22.881 23:30:45 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:34:22.881 23:30:45 -- target/dif.sh@92 -- # local files=1 00:34:22.881 23:30:45 -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:22.881 23:30:45 -- target/dif.sh@28 -- # local sub 00:34:22.881 23:30:45 -- target/dif.sh@30 -- # for sub in "$@" 00:34:22.881 23:30:45 -- target/dif.sh@31 -- # create_subsystem 0 00:34:22.881 23:30:45 -- target/dif.sh@18 -- # local sub_id=0 00:34:22.881 23:30:45 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:22.881 23:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:22.881 23:30:45 -- common/autotest_common.sh@10 -- # set +x 00:34:22.881 bdev_null0 00:34:22.881 23:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:22.881 23:30:45 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:22.881 23:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:22.881 23:30:45 -- common/autotest_common.sh@10 -- # set +x 00:34:22.881 23:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:22.881 23:30:45 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:22.881 23:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:22.881 23:30:45 -- common/autotest_common.sh@10 -- # set +x 00:34:23.141 23:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:23.141 23:30:45 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:23.141 23:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:23.141 23:30:45 -- common/autotest_common.sh@10 -- # set +x 00:34:23.141 [2024-06-07 23:30:45.570875] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:23.141 23:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:23.141 23:30:45 -- target/dif.sh@30 -- # for sub in "$@" 00:34:23.141 23:30:45 -- target/dif.sh@31 -- # create_subsystem 1 00:34:23.141 23:30:45 -- target/dif.sh@18 -- # local sub_id=1 00:34:23.141 23:30:45 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:23.141 23:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:23.141 23:30:45 -- common/autotest_common.sh@10 -- # set +x 00:34:23.141 bdev_null1 00:34:23.141 23:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:23.141 23:30:45 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:23.141 23:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:23.141 23:30:45 -- common/autotest_common.sh@10 -- # set +x 00:34:23.141 23:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:23.141 23:30:45 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:23.141 23:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:23.141 23:30:45 -- common/autotest_common.sh@10 -- # set +x 00:34:23.141 23:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:23.141 23:30:45 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:23.141 23:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:23.141 23:30:45 -- 
common/autotest_common.sh@10 -- # set +x 00:34:23.141 23:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:23.141 23:30:45 -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:23.141 23:30:45 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:23.141 23:30:45 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:23.141 23:30:45 -- nvmf/common.sh@520 -- # config=() 00:34:23.141 23:30:45 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:23.141 23:30:45 -- nvmf/common.sh@520 -- # local subsystem config 00:34:23.141 23:30:45 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:23.141 23:30:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:23.141 23:30:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:23.141 { 00:34:23.141 "params": { 00:34:23.141 "name": "Nvme$subsystem", 00:34:23.141 "trtype": "$TEST_TRANSPORT", 00:34:23.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:23.141 "adrfam": "ipv4", 00:34:23.141 "trsvcid": "$NVMF_PORT", 00:34:23.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:23.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:23.141 "hdgst": ${hdgst:-false}, 00:34:23.141 "ddgst": ${ddgst:-false} 00:34:23.141 }, 00:34:23.141 "method": "bdev_nvme_attach_controller" 00:34:23.141 } 00:34:23.141 EOF 00:34:23.141 )") 00:34:23.141 23:30:45 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:23.141 23:30:45 -- target/dif.sh@82 -- # gen_fio_conf 00:34:23.141 23:30:45 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:23.141 23:30:45 -- target/dif.sh@54 -- # local file 00:34:23.141 23:30:45 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:23.141 23:30:45 -- target/dif.sh@56 -- # cat 00:34:23.141 23:30:45 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:23.141 23:30:45 -- common/autotest_common.sh@1320 -- # shift 00:34:23.141 23:30:45 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:23.141 23:30:45 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:23.141 23:30:45 -- nvmf/common.sh@542 -- # cat 00:34:23.141 23:30:45 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:23.141 23:30:45 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:23.141 23:30:45 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:23.141 23:30:45 -- target/dif.sh@72 -- # (( file <= files )) 00:34:23.141 23:30:45 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:23.141 23:30:45 -- target/dif.sh@73 -- # cat 00:34:23.141 23:30:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:23.141 23:30:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:23.141 { 00:34:23.141 "params": { 00:34:23.141 "name": "Nvme$subsystem", 00:34:23.141 "trtype": "$TEST_TRANSPORT", 00:34:23.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:23.141 "adrfam": "ipv4", 00:34:23.141 "trsvcid": "$NVMF_PORT", 00:34:23.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:23.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:23.141 "hdgst": ${hdgst:-false}, 00:34:23.141 "ddgst": ${ddgst:-false} 00:34:23.141 }, 00:34:23.141 "method": "bdev_nvme_attach_controller" 00:34:23.141 } 00:34:23.141 EOF 00:34:23.141 )") 00:34:23.141 23:30:45 -- 
target/dif.sh@72 -- # (( file++ )) 00:34:23.141 23:30:45 -- target/dif.sh@72 -- # (( file <= files )) 00:34:23.141 23:30:45 -- nvmf/common.sh@542 -- # cat 00:34:23.141 23:30:45 -- nvmf/common.sh@544 -- # jq . 00:34:23.141 23:30:45 -- nvmf/common.sh@545 -- # IFS=, 00:34:23.141 23:30:45 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:23.141 "params": { 00:34:23.141 "name": "Nvme0", 00:34:23.141 "trtype": "tcp", 00:34:23.141 "traddr": "10.0.0.2", 00:34:23.141 "adrfam": "ipv4", 00:34:23.141 "trsvcid": "4420", 00:34:23.141 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:23.141 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:23.141 "hdgst": false, 00:34:23.141 "ddgst": false 00:34:23.141 }, 00:34:23.141 "method": "bdev_nvme_attach_controller" 00:34:23.141 },{ 00:34:23.141 "params": { 00:34:23.141 "name": "Nvme1", 00:34:23.141 "trtype": "tcp", 00:34:23.141 "traddr": "10.0.0.2", 00:34:23.141 "adrfam": "ipv4", 00:34:23.141 "trsvcid": "4420", 00:34:23.141 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:23.141 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:23.141 "hdgst": false, 00:34:23.141 "ddgst": false 00:34:23.141 }, 00:34:23.141 "method": "bdev_nvme_attach_controller" 00:34:23.141 }' 00:34:23.141 23:30:45 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:23.141 23:30:45 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:23.141 23:30:45 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:23.141 23:30:45 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:23.141 23:30:45 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:23.141 23:30:45 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:23.141 23:30:45 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:23.141 23:30:45 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:23.142 23:30:45 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:23.142 23:30:45 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:23.401 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:23.401 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:23.401 fio-3.35 00:34:23.401 Starting 2 threads 00:34:23.401 EAL: No free 2048 kB hugepages reported on node 1 00:34:23.969 [2024-06-07 23:30:46.620065] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
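For the multi-subsystem case the generated configuration simply carries one bdev_nvme_attach_controller entry per subsystem, so fio sees two remote bdevs (typically named Nvme0n1 and Nvme1n1) and runs one job against each. A standalone equivalent of what is fed to fio via /dev/fd/62 above; the outer "subsystems"/"bdev" wrapper, the file paths, and the job parameters are assumptions, while the two params objects come from the printf in the trace:

  cat > /tmp/two_subsystems.json <<'EOF'
  {
    "subsystems": [
      { "subsystem": "bdev",
        "config": [
          { "method": "bdev_nvme_attach_controller",
            "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
                        "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
                        "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": false, "ddgst": false } },
          { "method": "bdev_nvme_attach_controller",
            "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
                        "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
                        "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false } }
        ] }
    ]
  }
  EOF
  LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
    fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/two_subsystems.json --thread=1 \
        --rw=randread --bs=4k --iodepth=4 --time_based=1 --runtime=10 \
        --name=filename0 --filename=Nvme0n1 \
        --name=filename1 --filename=Nvme1n1

The spdk_bdev engine runs as an fio plugin, hence the LD_PRELOAD of build/fio/spdk_bdev and thread mode, matching the invocation in the trace.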
00:34:23.969 [2024-06-07 23:30:46.620107] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:36.194 00:34:36.194 filename0: (groupid=0, jobs=1): err= 0: pid=3076497: Fri Jun 7 23:30:56 2024 00:34:36.194 read: IOPS=95, BW=382KiB/s (391kB/s)(3824KiB/10004msec) 00:34:36.194 slat (nsec): min=5334, max=62211, avg=6332.89, stdev=2445.73 00:34:36.194 clat (usec): min=40845, max=42991, avg=41836.18, stdev=358.66 00:34:36.194 lat (usec): min=40853, max=42997, avg=41842.51, stdev=358.49 00:34:36.194 clat percentiles (usec): 00:34:36.194 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:34:36.194 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:34:36.194 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:36.194 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:34:36.194 | 99.99th=[42730] 00:34:36.194 bw ( KiB/s): min= 352, max= 416, per=50.11%, avg=382.32, stdev=12.95, samples=19 00:34:36.194 iops : min= 88, max= 104, avg=95.58, stdev= 3.24, samples=19 00:34:36.194 lat (msec) : 50=100.00% 00:34:36.194 cpu : usr=97.07%, sys=2.69%, ctx=14, majf=0, minf=240 00:34:36.194 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:36.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.194 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.194 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:36.194 filename1: (groupid=0, jobs=1): err= 0: pid=3076498: Fri Jun 7 23:30:56 2024 00:34:36.194 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10033msec) 00:34:36.194 slat (nsec): min=5331, max=36805, avg=6255.69, stdev=1591.90 00:34:36.194 clat (usec): min=40889, max=43101, avg=41958.27, stdev=257.35 00:34:36.194 lat (usec): min=40894, max=43138, avg=41964.52, stdev=257.58 00:34:36.194 clat percentiles (usec): 00:34:36.194 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:34:36.194 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:34:36.194 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:36.194 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:34:36.194 | 99.99th=[43254] 00:34:36.194 bw ( KiB/s): min= 352, max= 384, per=49.85%, avg=380.80, stdev= 9.85, samples=20 00:34:36.194 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:34:36.194 lat (msec) : 50=100.00% 00:34:36.194 cpu : usr=97.36%, sys=2.40%, ctx=13, majf=0, minf=117 00:34:36.194 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:36.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.194 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.194 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:36.194 00:34:36.194 Run status group 0 (all jobs): 00:34:36.194 READ: bw=762KiB/s (781kB/s), 381KiB/s-382KiB/s (390kB/s-391kB/s), io=7648KiB (7832kB), run=10004-10033msec 00:34:36.194 23:30:56 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:36.194 23:30:56 -- target/dif.sh@43 -- # local sub 00:34:36.194 23:30:56 -- target/dif.sh@45 -- # for sub in "$@" 00:34:36.194 23:30:56 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:36.194 23:30:56 -- target/dif.sh@36 -- # local sub_id=0 
00:34:36.194 23:30:56 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:36.194 23:30:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:36.194 23:30:56 -- common/autotest_common.sh@10 -- # set +x 00:34:36.194 23:30:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:36.194 23:30:56 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:36.194 23:30:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:36.194 23:30:56 -- common/autotest_common.sh@10 -- # set +x 00:34:36.194 23:30:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:36.194 23:30:56 -- target/dif.sh@45 -- # for sub in "$@" 00:34:36.194 23:30:56 -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:36.194 23:30:56 -- target/dif.sh@36 -- # local sub_id=1 00:34:36.194 23:30:56 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:36.194 23:30:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:36.194 23:30:56 -- common/autotest_common.sh@10 -- # set +x 00:34:36.194 23:30:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:36.194 23:30:56 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:36.194 23:30:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:36.194 23:30:56 -- common/autotest_common.sh@10 -- # set +x 00:34:36.194 23:30:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:36.194 00:34:36.194 real 0m11.401s 00:34:36.194 user 0m33.145s 00:34:36.194 sys 0m0.868s 00:34:36.194 23:30:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:36.194 23:30:56 -- common/autotest_common.sh@10 -- # set +x 00:34:36.194 ************************************ 00:34:36.194 END TEST fio_dif_1_multi_subsystems 00:34:36.194 ************************************ 00:34:36.194 23:30:56 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:36.194 23:30:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:36.194 23:30:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:36.194 23:30:56 -- common/autotest_common.sh@10 -- # set +x 00:34:36.194 ************************************ 00:34:36.194 START TEST fio_dif_rand_params 00:34:36.194 ************************************ 00:34:36.194 23:30:56 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:34:36.194 23:30:56 -- target/dif.sh@100 -- # local NULL_DIF 00:34:36.194 23:30:56 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:36.194 23:30:56 -- target/dif.sh@103 -- # NULL_DIF=3 00:34:36.194 23:30:56 -- target/dif.sh@103 -- # bs=128k 00:34:36.194 23:30:56 -- target/dif.sh@103 -- # numjobs=3 00:34:36.194 23:30:56 -- target/dif.sh@103 -- # iodepth=3 00:34:36.194 23:30:56 -- target/dif.sh@103 -- # runtime=5 00:34:36.194 23:30:56 -- target/dif.sh@105 -- # create_subsystems 0 00:34:36.194 23:30:56 -- target/dif.sh@28 -- # local sub 00:34:36.194 23:30:56 -- target/dif.sh@30 -- # for sub in "$@" 00:34:36.194 23:30:56 -- target/dif.sh@31 -- # create_subsystem 0 00:34:36.194 23:30:56 -- target/dif.sh@18 -- # local sub_id=0 00:34:36.194 23:30:56 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:36.194 23:30:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:36.194 23:30:56 -- common/autotest_common.sh@10 -- # set +x 00:34:36.194 bdev_null0 00:34:36.194 23:30:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:36.194 23:30:57 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:36.194 23:30:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:36.194 23:30:57 -- common/autotest_common.sh@10 -- # set +x 00:34:36.194 23:30:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:36.194 23:30:57 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:36.194 23:30:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:36.194 23:30:57 -- common/autotest_common.sh@10 -- # set +x 00:34:36.194 23:30:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:36.194 23:30:57 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:36.194 23:30:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:36.194 23:30:57 -- common/autotest_common.sh@10 -- # set +x 00:34:36.194 [2024-06-07 23:30:57.019721] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:36.195 23:30:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:36.195 23:30:57 -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:36.195 23:30:57 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:36.195 23:30:57 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:36.195 23:30:57 -- nvmf/common.sh@520 -- # config=() 00:34:36.195 23:30:57 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:36.195 23:30:57 -- nvmf/common.sh@520 -- # local subsystem config 00:34:36.195 23:30:57 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:36.195 23:30:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:36.195 23:30:57 -- target/dif.sh@82 -- # gen_fio_conf 00:34:36.195 23:30:57 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:36.195 23:30:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:36.195 { 00:34:36.195 "params": { 00:34:36.195 "name": "Nvme$subsystem", 00:34:36.195 "trtype": "$TEST_TRANSPORT", 00:34:36.195 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:36.195 "adrfam": "ipv4", 00:34:36.195 "trsvcid": "$NVMF_PORT", 00:34:36.195 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:36.195 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:36.195 "hdgst": ${hdgst:-false}, 00:34:36.195 "ddgst": ${ddgst:-false} 00:34:36.195 }, 00:34:36.195 "method": "bdev_nvme_attach_controller" 00:34:36.195 } 00:34:36.195 EOF 00:34:36.195 )") 00:34:36.195 23:30:57 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:36.195 23:30:57 -- target/dif.sh@54 -- # local file 00:34:36.195 23:30:57 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:36.195 23:30:57 -- target/dif.sh@56 -- # cat 00:34:36.195 23:30:57 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:36.195 23:30:57 -- common/autotest_common.sh@1320 -- # shift 00:34:36.195 23:30:57 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:36.195 23:30:57 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:36.195 23:30:57 -- nvmf/common.sh@542 -- # cat 00:34:36.195 23:30:57 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:36.195 23:30:57 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:36.195 23:30:57 
-- common/autotest_common.sh@1324 -- # grep libasan 00:34:36.195 23:30:57 -- target/dif.sh@72 -- # (( file <= files )) 00:34:36.195 23:30:57 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:36.195 23:30:57 -- nvmf/common.sh@544 -- # jq . 00:34:36.195 23:30:57 -- nvmf/common.sh@545 -- # IFS=, 00:34:36.195 23:30:57 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:36.195 "params": { 00:34:36.195 "name": "Nvme0", 00:34:36.195 "trtype": "tcp", 00:34:36.195 "traddr": "10.0.0.2", 00:34:36.195 "adrfam": "ipv4", 00:34:36.195 "trsvcid": "4420", 00:34:36.195 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:36.195 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:36.195 "hdgst": false, 00:34:36.195 "ddgst": false 00:34:36.195 }, 00:34:36.195 "method": "bdev_nvme_attach_controller" 00:34:36.195 }' 00:34:36.195 23:30:57 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:36.195 23:30:57 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:36.195 23:30:57 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:36.195 23:30:57 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:36.195 23:30:57 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:36.195 23:30:57 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:36.195 23:30:57 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:36.195 23:30:57 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:36.195 23:30:57 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:36.195 23:30:57 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:36.195 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:36.195 ... 00:34:36.195 fio-3.35 00:34:36.195 Starting 3 threads 00:34:36.195 EAL: No free 2048 kB hugepages reported on node 1 00:34:36.195 [2024-06-07 23:30:57.788463] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
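fio_dif_rand_params drives the same plumbing with varied job parameters; this pass pairs a DIF type 3 null bdev (512-byte blocks, 16-byte metadata) with 128 KiB I/O, 3 jobs, queue depth 3 and a 5-second runtime. The actual job file is generated by gen_fio_conf and passed on /dev/fd/61, so it is not visible in the trace; the command line below is only an assumed equivalent built from the NULL_DIF=3 / bs=128k / numjobs=3 / iodepth=3 / runtime=5 settings above:

  LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
    fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/bdev.json --thread=1 \
        --rw=randread --bs=128k --iodepth=3 --numjobs=3 \
        --time_based=1 --runtime=5 \
        --name=filename0 --filename=Nvme0n1   # bdev name assumed; one namespace behind cnode0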
00:34:36.195 [2024-06-07 23:30:57.788507] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:40.399 00:34:40.399 filename0: (groupid=0, jobs=1): err= 0: pid=3078808: Fri Jun 7 23:31:02 2024 00:34:40.399 read: IOPS=188, BW=23.5MiB/s (24.7MB/s)(119MiB/5047msec) 00:34:40.399 slat (nsec): min=7835, max=30464, avg=8565.65, stdev=1250.92 00:34:40.399 clat (usec): min=5755, max=94106, avg=15878.46, stdev=15386.53 00:34:40.399 lat (usec): min=5763, max=94114, avg=15887.02, stdev=15386.55 00:34:40.399 clat percentiles (usec): 00:34:40.399 | 1.00th=[ 6194], 5.00th=[ 6849], 10.00th=[ 7504], 20.00th=[ 8586], 00:34:40.399 | 30.00th=[ 9241], 40.00th=[ 9896], 50.00th=[10683], 60.00th=[11600], 00:34:40.399 | 70.00th=[12256], 80.00th=[13566], 90.00th=[49546], 95.00th=[52167], 00:34:40.399 | 99.00th=[89654], 99.50th=[91751], 99.90th=[93848], 99.95th=[93848], 00:34:40.399 | 99.99th=[93848] 00:34:40.399 bw ( KiB/s): min= 9728, max=38656, per=31.22%, avg=24246.30, stdev=8162.70, samples=10 00:34:40.399 iops : min= 76, max= 302, avg=189.40, stdev=63.80, samples=10 00:34:40.399 lat (msec) : 10=40.84%, 20=46.84%, 50=2.95%, 100=9.37% 00:34:40.399 cpu : usr=95.94%, sys=3.80%, ctx=8, majf=0, minf=183 00:34:40.399 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:40.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.399 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.399 issued rwts: total=950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:40.399 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:40.399 filename0: (groupid=0, jobs=1): err= 0: pid=3078809: Fri Jun 7 23:31:02 2024 00:34:40.399 read: IOPS=192, BW=24.1MiB/s (25.2MB/s)(122MiB/5048msec) 00:34:40.399 slat (nsec): min=5464, max=32373, avg=8251.95, stdev=1917.71 00:34:40.399 clat (usec): min=5922, max=91440, avg=15524.36, stdev=13940.03 00:34:40.399 lat (usec): min=5930, max=91449, avg=15532.61, stdev=13940.02 00:34:40.399 clat percentiles (usec): 00:34:40.399 | 1.00th=[ 6456], 5.00th=[ 6980], 10.00th=[ 7701], 20.00th=[ 8586], 00:34:40.399 | 30.00th=[ 9110], 40.00th=[ 9765], 50.00th=[10552], 60.00th=[11207], 00:34:40.399 | 70.00th=[12256], 80.00th=[13698], 90.00th=[49021], 95.00th=[51643], 00:34:40.399 | 99.00th=[53740], 99.50th=[54264], 99.90th=[91751], 99.95th=[91751], 00:34:40.399 | 99.99th=[91751] 00:34:40.399 bw ( KiB/s): min=17664, max=42240, per=31.94%, avg=24806.40, stdev=7881.63, samples=10 00:34:40.399 iops : min= 138, max= 330, avg=193.80, stdev=61.58, samples=10 00:34:40.399 lat (msec) : 10=42.39%, 20=44.86%, 50=3.81%, 100=8.95% 00:34:40.399 cpu : usr=96.28%, sys=3.47%, ctx=7, majf=0, minf=95 00:34:40.399 IO depths : 1=2.3%, 2=97.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:40.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.399 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.399 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:40.399 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:40.399 filename0: (groupid=0, jobs=1): err= 0: pid=3078810: Fri Jun 7 23:31:02 2024 00:34:40.399 read: IOPS=226, BW=28.3MiB/s (29.7MB/s)(143MiB/5042msec) 00:34:40.399 slat (nsec): min=5414, max=32247, avg=8112.23, stdev=1981.50 00:34:40.399 clat (usec): min=5506, max=56035, avg=13209.02, stdev=11292.91 00:34:40.399 lat (usec): min=5511, max=56044, avg=13217.13, stdev=11293.02 00:34:40.399 clat percentiles 
(usec): 00:34:40.399 | 1.00th=[ 6128], 5.00th=[ 6980], 10.00th=[ 7504], 20.00th=[ 8356], 00:34:40.399 | 30.00th=[ 8848], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[10683], 00:34:40.399 | 70.00th=[11207], 80.00th=[11994], 90.00th=[14091], 95.00th=[49546], 00:34:40.399 | 99.00th=[51643], 99.50th=[52691], 99.90th=[53740], 99.95th=[55837], 00:34:40.399 | 99.99th=[55837] 00:34:40.399 bw ( KiB/s): min=17408, max=41216, per=37.54%, avg=29158.40, stdev=7060.99, samples=10 00:34:40.399 iops : min= 136, max= 322, avg=227.80, stdev=55.16, samples=10 00:34:40.399 lat (msec) : 10=51.53%, 20=40.14%, 50=4.21%, 100=4.12% 00:34:40.399 cpu : usr=95.46%, sys=4.27%, ctx=13, majf=0, minf=127 00:34:40.399 IO depths : 1=1.3%, 2=98.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:40.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.399 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.399 issued rwts: total=1141,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:40.399 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:40.399 00:34:40.399 Run status group 0 (all jobs): 00:34:40.399 READ: bw=75.8MiB/s (79.5MB/s), 23.5MiB/s-28.3MiB/s (24.7MB/s-29.7MB/s), io=383MiB (401MB), run=5042-5048msec 00:34:40.660 23:31:03 -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:40.660 23:31:03 -- target/dif.sh@43 -- # local sub 00:34:40.660 23:31:03 -- target/dif.sh@45 -- # for sub in "$@" 00:34:40.660 23:31:03 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:40.660 23:31:03 -- target/dif.sh@36 -- # local sub_id=0 00:34:40.660 23:31:03 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:40.660 23:31:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:40.660 23:31:03 -- common/autotest_common.sh@10 -- # set +x 00:34:40.660 23:31:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:40.660 23:31:03 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:40.660 23:31:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:40.660 23:31:03 -- common/autotest_common.sh@10 -- # set +x 00:34:40.660 23:31:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:40.660 23:31:03 -- target/dif.sh@109 -- # NULL_DIF=2 00:34:40.660 23:31:03 -- target/dif.sh@109 -- # bs=4k 00:34:40.660 23:31:03 -- target/dif.sh@109 -- # numjobs=8 00:34:40.660 23:31:03 -- target/dif.sh@109 -- # iodepth=16 00:34:40.660 23:31:03 -- target/dif.sh@109 -- # runtime= 00:34:40.660 23:31:03 -- target/dif.sh@109 -- # files=2 00:34:40.660 23:31:03 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:40.660 23:31:03 -- target/dif.sh@28 -- # local sub 00:34:40.660 23:31:03 -- target/dif.sh@30 -- # for sub in "$@" 00:34:40.660 23:31:03 -- target/dif.sh@31 -- # create_subsystem 0 00:34:40.660 23:31:03 -- target/dif.sh@18 -- # local sub_id=0 00:34:40.660 23:31:03 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:40.660 23:31:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:40.660 23:31:03 -- common/autotest_common.sh@10 -- # set +x 00:34:40.660 bdev_null0 00:34:40.660 23:31:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:40.660 23:31:03 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:40.660 23:31:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:40.660 23:31:03 -- common/autotest_common.sh@10 -- # set +x 00:34:40.660 23:31:03 -- common/autotest_common.sh@579 
-- # [[ 0 == 0 ]] 00:34:40.660 23:31:03 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:40.660 23:31:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:40.660 23:31:03 -- common/autotest_common.sh@10 -- # set +x 00:34:40.660 23:31:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:40.660 23:31:03 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:40.660 23:31:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:40.660 23:31:03 -- common/autotest_common.sh@10 -- # set +x 00:34:40.660 [2024-06-07 23:31:03.145509] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:40.660 23:31:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:40.660 23:31:03 -- target/dif.sh@30 -- # for sub in "$@" 00:34:40.660 23:31:03 -- target/dif.sh@31 -- # create_subsystem 1 00:34:40.660 23:31:03 -- target/dif.sh@18 -- # local sub_id=1 00:34:40.660 23:31:03 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:40.660 23:31:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:40.660 23:31:03 -- common/autotest_common.sh@10 -- # set +x 00:34:40.660 bdev_null1 00:34:40.660 23:31:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:40.660 23:31:03 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:40.660 23:31:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:40.660 23:31:03 -- common/autotest_common.sh@10 -- # set +x 00:34:40.660 23:31:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:40.660 23:31:03 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:40.660 23:31:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:40.660 23:31:03 -- common/autotest_common.sh@10 -- # set +x 00:34:40.660 23:31:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:40.660 23:31:03 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:40.660 23:31:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:40.660 23:31:03 -- common/autotest_common.sh@10 -- # set +x 00:34:40.660 23:31:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:40.660 23:31:03 -- target/dif.sh@30 -- # for sub in "$@" 00:34:40.660 23:31:03 -- target/dif.sh@31 -- # create_subsystem 2 00:34:40.660 23:31:03 -- target/dif.sh@18 -- # local sub_id=2 00:34:40.660 23:31:03 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:40.660 23:31:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:40.660 23:31:03 -- common/autotest_common.sh@10 -- # set +x 00:34:40.660 bdev_null2 00:34:40.660 23:31:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:40.660 23:31:03 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:40.660 23:31:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:40.660 23:31:03 -- common/autotest_common.sh@10 -- # set +x 00:34:40.660 23:31:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:40.660 23:31:03 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:40.660 23:31:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:40.660 23:31:03 -- 
common/autotest_common.sh@10 -- # set +x 00:34:40.660 23:31:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:40.660 23:31:03 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:40.660 23:31:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:40.660 23:31:03 -- common/autotest_common.sh@10 -- # set +x 00:34:40.660 23:31:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:40.660 23:31:03 -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:40.660 23:31:03 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:40.660 23:31:03 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:40.660 23:31:03 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:40.660 23:31:03 -- nvmf/common.sh@520 -- # config=() 00:34:40.660 23:31:03 -- nvmf/common.sh@520 -- # local subsystem config 00:34:40.660 23:31:03 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:40.661 23:31:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:40.661 23:31:03 -- target/dif.sh@82 -- # gen_fio_conf 00:34:40.661 23:31:03 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:40.661 23:31:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:40.661 { 00:34:40.661 "params": { 00:34:40.661 "name": "Nvme$subsystem", 00:34:40.661 "trtype": "$TEST_TRANSPORT", 00:34:40.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:40.661 "adrfam": "ipv4", 00:34:40.661 "trsvcid": "$NVMF_PORT", 00:34:40.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:40.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:40.661 "hdgst": ${hdgst:-false}, 00:34:40.661 "ddgst": ${ddgst:-false} 00:34:40.661 }, 00:34:40.661 "method": "bdev_nvme_attach_controller" 00:34:40.661 } 00:34:40.661 EOF 00:34:40.661 )") 00:34:40.661 23:31:03 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:40.661 23:31:03 -- target/dif.sh@54 -- # local file 00:34:40.661 23:31:03 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:40.661 23:31:03 -- target/dif.sh@56 -- # cat 00:34:40.661 23:31:03 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:40.661 23:31:03 -- common/autotest_common.sh@1320 -- # shift 00:34:40.661 23:31:03 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:40.661 23:31:03 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:40.661 23:31:03 -- nvmf/common.sh@542 -- # cat 00:34:40.661 23:31:03 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:40.661 23:31:03 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:40.661 23:31:03 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:40.661 23:31:03 -- target/dif.sh@72 -- # (( file <= files )) 00:34:40.661 23:31:03 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:40.661 23:31:03 -- target/dif.sh@73 -- # cat 00:34:40.661 23:31:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:40.661 23:31:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:40.661 { 00:34:40.661 "params": { 00:34:40.661 "name": "Nvme$subsystem", 00:34:40.661 "trtype": "$TEST_TRANSPORT", 00:34:40.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:40.661 "adrfam": "ipv4", 00:34:40.661 "trsvcid": 
"$NVMF_PORT", 00:34:40.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:40.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:40.661 "hdgst": ${hdgst:-false}, 00:34:40.661 "ddgst": ${ddgst:-false} 00:34:40.661 }, 00:34:40.661 "method": "bdev_nvme_attach_controller" 00:34:40.661 } 00:34:40.661 EOF 00:34:40.661 )") 00:34:40.661 23:31:03 -- target/dif.sh@72 -- # (( file++ )) 00:34:40.661 23:31:03 -- target/dif.sh@72 -- # (( file <= files )) 00:34:40.661 23:31:03 -- target/dif.sh@73 -- # cat 00:34:40.661 23:31:03 -- nvmf/common.sh@542 -- # cat 00:34:40.661 23:31:03 -- target/dif.sh@72 -- # (( file++ )) 00:34:40.661 23:31:03 -- target/dif.sh@72 -- # (( file <= files )) 00:34:40.661 23:31:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:40.661 23:31:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:40.661 { 00:34:40.661 "params": { 00:34:40.661 "name": "Nvme$subsystem", 00:34:40.661 "trtype": "$TEST_TRANSPORT", 00:34:40.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:40.661 "adrfam": "ipv4", 00:34:40.661 "trsvcid": "$NVMF_PORT", 00:34:40.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:40.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:40.661 "hdgst": ${hdgst:-false}, 00:34:40.661 "ddgst": ${ddgst:-false} 00:34:40.661 }, 00:34:40.661 "method": "bdev_nvme_attach_controller" 00:34:40.661 } 00:34:40.661 EOF 00:34:40.661 )") 00:34:40.661 23:31:03 -- nvmf/common.sh@542 -- # cat 00:34:40.661 23:31:03 -- nvmf/common.sh@544 -- # jq . 00:34:40.661 23:31:03 -- nvmf/common.sh@545 -- # IFS=, 00:34:40.661 23:31:03 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:40.661 "params": { 00:34:40.661 "name": "Nvme0", 00:34:40.661 "trtype": "tcp", 00:34:40.661 "traddr": "10.0.0.2", 00:34:40.661 "adrfam": "ipv4", 00:34:40.661 "trsvcid": "4420", 00:34:40.661 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:40.661 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:40.661 "hdgst": false, 00:34:40.661 "ddgst": false 00:34:40.661 }, 00:34:40.661 "method": "bdev_nvme_attach_controller" 00:34:40.661 },{ 00:34:40.661 "params": { 00:34:40.661 "name": "Nvme1", 00:34:40.661 "trtype": "tcp", 00:34:40.661 "traddr": "10.0.0.2", 00:34:40.661 "adrfam": "ipv4", 00:34:40.661 "trsvcid": "4420", 00:34:40.661 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:40.661 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:40.661 "hdgst": false, 00:34:40.661 "ddgst": false 00:34:40.661 }, 00:34:40.661 "method": "bdev_nvme_attach_controller" 00:34:40.661 },{ 00:34:40.661 "params": { 00:34:40.661 "name": "Nvme2", 00:34:40.661 "trtype": "tcp", 00:34:40.661 "traddr": "10.0.0.2", 00:34:40.661 "adrfam": "ipv4", 00:34:40.661 "trsvcid": "4420", 00:34:40.661 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:40.661 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:40.661 "hdgst": false, 00:34:40.661 "ddgst": false 00:34:40.661 }, 00:34:40.661 "method": "bdev_nvme_attach_controller" 00:34:40.661 }' 00:34:40.661 23:31:03 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:40.661 23:31:03 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:40.661 23:31:03 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:40.661 23:31:03 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:40.661 23:31:03 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:40.661 23:31:03 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:40.661 23:31:03 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:40.661 23:31:03 -- 
common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:40.661 23:31:03 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:40.661 23:31:03 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:41.256 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:41.256 ... 00:34:41.256 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:41.256 ... 00:34:41.256 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:41.256 ... 00:34:41.256 fio-3.35 00:34:41.256 Starting 24 threads 00:34:41.256 EAL: No free 2048 kB hugepages reported on node 1 00:34:41.826 [2024-06-07 23:31:04.468069] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:34:41.826 [2024-06-07 23:31:04.468115] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:54.056 00:34:54.056 filename0: (groupid=0, jobs=1): err= 0: pid=3080233: Fri Jun 7 23:31:14 2024 00:34:54.056 read: IOPS=542, BW=2170KiB/s (2222kB/s)(21.3MiB/10035msec) 00:34:54.056 slat (nsec): min=5516, max=88199, avg=14192.21, stdev=13742.59 00:34:54.056 clat (usec): min=2351, max=56003, avg=29377.35, stdev=6202.08 00:34:54.056 lat (usec): min=2367, max=56011, avg=29391.55, stdev=6203.81 00:34:54.056 clat percentiles (usec): 00:34:54.056 | 1.00th=[ 4178], 5.00th=[19268], 10.00th=[21103], 20.00th=[26346], 00:34:54.056 | 30.00th=[29754], 40.00th=[30278], 50.00th=[30540], 60.00th=[30802], 00:34:54.056 | 70.00th=[30802], 80.00th=[31327], 90.00th=[34866], 95.00th=[38536], 00:34:54.056 | 99.00th=[50070], 99.50th=[53216], 99.90th=[55837], 99.95th=[55837], 00:34:54.056 | 99.99th=[55837] 00:34:54.056 bw ( KiB/s): min= 1904, max= 2560, per=4.36%, avg=2173.60, stdev=139.41, samples=20 00:34:54.056 iops : min= 476, max= 640, avg=543.40, stdev=34.85, samples=20 00:34:54.056 lat (msec) : 4=0.96%, 10=0.51%, 20=4.94%, 50=92.56%, 100=1.03% 00:34:54.056 cpu : usr=99.07%, sys=0.63%, ctx=23, majf=0, minf=52 00:34:54.056 IO depths : 1=0.9%, 2=1.9%, 4=8.2%, 8=75.8%, 16=13.2%, 32=0.0%, >=64=0.0% 00:34:54.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.056 complete : 0=0.0%, 4=90.0%, 8=6.0%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.056 issued rwts: total=5444,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.056 filename0: (groupid=0, jobs=1): err= 0: pid=3080234: Fri Jun 7 23:31:14 2024 00:34:54.056 read: IOPS=515, BW=2062KiB/s (2111kB/s)(20.1MiB/10008msec) 00:34:54.056 slat (nsec): min=5656, max=94411, avg=27174.12, stdev=15447.63 00:34:54.056 clat (usec): min=12801, max=51446, avg=30792.88, stdev=2217.34 00:34:54.056 lat (usec): min=12807, max=51454, avg=30820.05, stdev=2216.93 00:34:54.056 clat percentiles (usec): 00:34:54.056 | 1.00th=[25560], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:34:54.056 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:34:54.056 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31851], 00:34:54.056 | 99.00th=[41681], 99.50th=[46400], 99.90th=[51643], 99.95th=[51643], 00:34:54.056 | 99.99th=[51643] 00:34:54.056 bw ( KiB/s): min= 1856, max= 2176, per=4.13%, avg=2056.47, 
stdev=79.19, samples=19 00:34:54.056 iops : min= 464, max= 544, avg=514.00, stdev=19.75, samples=19 00:34:54.056 lat (msec) : 20=0.31%, 50=99.57%, 100=0.12% 00:34:54.056 cpu : usr=99.00%, sys=0.62%, ctx=123, majf=0, minf=30 00:34:54.056 IO depths : 1=4.7%, 2=10.5%, 4=24.2%, 8=52.7%, 16=8.0%, 32=0.0%, >=64=0.0% 00:34:54.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.056 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.056 issued rwts: total=5158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.056 filename0: (groupid=0, jobs=1): err= 0: pid=3080235: Fri Jun 7 23:31:14 2024 00:34:54.056 read: IOPS=517, BW=2071KiB/s (2120kB/s)(20.2MiB/10014msec) 00:34:54.056 slat (nsec): min=5547, max=97761, avg=33470.80, stdev=16982.84 00:34:54.056 clat (usec): min=18640, max=58553, avg=30612.36, stdev=1026.93 00:34:54.056 lat (usec): min=18648, max=58568, avg=30645.83, stdev=1025.91 00:34:54.056 clat percentiles (usec): 00:34:54.056 | 1.00th=[28967], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:34:54.056 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:54.057 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31327], 00:34:54.057 | 99.00th=[31851], 99.50th=[32375], 99.90th=[43254], 99.95th=[43254], 00:34:54.057 | 99.99th=[58459] 00:34:54.057 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2067.68, stdev=64.38, samples=19 00:34:54.057 iops : min= 480, max= 544, avg=516.84, stdev=16.13, samples=19 00:34:54.057 lat (msec) : 20=0.04%, 50=99.92%, 100=0.04% 00:34:54.057 cpu : usr=99.15%, sys=0.50%, ctx=72, majf=0, minf=27 00:34:54.057 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:54.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.057 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.057 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.057 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.057 filename0: (groupid=0, jobs=1): err= 0: pid=3080236: Fri Jun 7 23:31:14 2024 00:34:54.057 read: IOPS=518, BW=2073KiB/s (2123kB/s)(20.2MiB/10003msec) 00:34:54.057 slat (nsec): min=5739, max=93601, avg=31404.72, stdev=16621.19 00:34:54.057 clat (usec): min=13365, max=49102, avg=30566.79, stdev=1484.63 00:34:54.057 lat (usec): min=13372, max=49118, avg=30598.19, stdev=1484.85 00:34:54.057 clat percentiles (usec): 00:34:54.057 | 1.00th=[28967], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:34:54.057 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:54.057 | 70.00th=[30802], 80.00th=[30802], 90.00th=[31327], 95.00th=[31327], 00:34:54.057 | 99.00th=[31851], 99.50th=[32375], 99.90th=[49021], 99.95th=[49021], 00:34:54.057 | 99.99th=[49021] 00:34:54.057 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2067.68, stdev=64.38, samples=19 00:34:54.057 iops : min= 480, max= 544, avg=516.84, stdev=16.13, samples=19 00:34:54.057 lat (msec) : 20=0.31%, 50=99.69% 00:34:54.057 cpu : usr=97.07%, sys=1.43%, ctx=226, majf=0, minf=41 00:34:54.057 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:54.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.057 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.057 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.057 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:34:54.057 filename0: (groupid=0, jobs=1): err= 0: pid=3080237: Fri Jun 7 23:31:14 2024 00:34:54.057 read: IOPS=519, BW=2079KiB/s (2129kB/s)(20.3MiB/10014msec) 00:34:54.057 slat (nsec): min=5515, max=77248, avg=10007.85, stdev=6925.46 00:34:54.057 clat (usec): min=13820, max=67140, avg=30702.23, stdev=2239.19 00:34:54.057 lat (usec): min=13849, max=67165, avg=30712.24, stdev=2239.28 00:34:54.057 clat percentiles (usec): 00:34:54.057 | 1.00th=[21365], 5.00th=[29492], 10.00th=[30016], 20.00th=[30278], 00:34:54.057 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30802], 60.00th=[30802], 00:34:54.057 | 70.00th=[31065], 80.00th=[31065], 90.00th=[31589], 95.00th=[31589], 00:34:54.057 | 99.00th=[34866], 99.50th=[36963], 99.90th=[54264], 99.95th=[66847], 00:34:54.057 | 99.99th=[67634] 00:34:54.057 bw ( KiB/s): min= 1920, max= 2192, per=4.17%, avg=2076.11, stdev=71.11, samples=19 00:34:54.057 iops : min= 480, max= 548, avg=518.95, stdev=17.82, samples=19 00:34:54.057 lat (msec) : 20=0.52%, 50=99.23%, 100=0.25% 00:34:54.057 cpu : usr=97.97%, sys=1.13%, ctx=49, majf=0, minf=58 00:34:54.057 IO depths : 1=5.2%, 2=10.6%, 4=22.4%, 8=54.1%, 16=7.7%, 32=0.0%, >=64=0.0% 00:34:54.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.057 complete : 0=0.0%, 4=93.6%, 8=1.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.057 issued rwts: total=5204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.057 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.057 filename0: (groupid=0, jobs=1): err= 0: pid=3080238: Fri Jun 7 23:31:14 2024 00:34:54.057 read: IOPS=519, BW=2078KiB/s (2127kB/s)(20.3MiB/10023msec) 00:34:54.057 slat (nsec): min=5630, max=84737, avg=23219.16, stdev=13666.45 00:34:54.057 clat (usec): min=16061, max=43236, avg=30603.57, stdev=1378.27 00:34:54.057 lat (usec): min=16081, max=43244, avg=30626.79, stdev=1378.15 00:34:54.057 clat percentiles (usec): 00:34:54.057 | 1.00th=[24249], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:34:54.057 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:34:54.057 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:34:54.057 | 99.00th=[34866], 99.50th=[37487], 99.90th=[38536], 99.95th=[41157], 00:34:54.057 | 99.99th=[43254] 00:34:54.057 bw ( KiB/s): min= 2043, max= 2176, per=4.16%, avg=2074.95, stdev=50.57, samples=20 00:34:54.057 iops : min= 510, max= 544, avg=518.55, stdev=12.67, samples=20 00:34:54.057 lat (msec) : 20=0.27%, 50=99.73% 00:34:54.057 cpu : usr=98.93%, sys=0.72%, ctx=42, majf=0, minf=55 00:34:54.057 IO depths : 1=4.9%, 2=10.8%, 4=24.3%, 8=52.2%, 16=7.7%, 32=0.0%, >=64=0.0% 00:34:54.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.057 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.057 issued rwts: total=5206,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.057 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.057 filename0: (groupid=0, jobs=1): err= 0: pid=3080239: Fri Jun 7 23:31:14 2024 00:34:54.057 read: IOPS=517, BW=2071KiB/s (2120kB/s)(20.2MiB/10014msec) 00:34:54.057 slat (nsec): min=5517, max=84251, avg=14703.79, stdev=12133.97 00:34:54.057 clat (usec): min=28091, max=43019, avg=30792.21, stdev=833.46 00:34:54.057 lat (usec): min=28098, max=43044, avg=30806.92, stdev=832.04 00:34:54.057 clat percentiles (usec): 00:34:54.057 | 1.00th=[29230], 5.00th=[30016], 10.00th=[30278], 20.00th=[30540], 00:34:54.057 | 30.00th=[30540], 
40.00th=[30540], 50.00th=[30802], 60.00th=[30802], 00:34:54.057 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:34:54.057 | 99.00th=[31851], 99.50th=[32637], 99.90th=[42730], 99.95th=[43254], 00:34:54.057 | 99.99th=[43254] 00:34:54.057 bw ( KiB/s): min= 1916, max= 2176, per=4.15%, avg=2067.47, stdev=64.90, samples=19 00:34:54.057 iops : min= 479, max= 544, avg=516.79, stdev=16.26, samples=19 00:34:54.057 lat (msec) : 50=100.00% 00:34:54.057 cpu : usr=99.29%, sys=0.42%, ctx=21, majf=0, minf=61 00:34:54.057 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:54.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.057 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.057 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.057 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.057 filename0: (groupid=0, jobs=1): err= 0: pid=3080240: Fri Jun 7 23:31:14 2024 00:34:54.057 read: IOPS=518, BW=2074KiB/s (2124kB/s)(20.3MiB/10014msec) 00:34:54.057 slat (nsec): min=5545, max=91660, avg=24160.84, stdev=13230.39 00:34:54.057 clat (usec): min=13604, max=58171, avg=30659.72, stdev=2600.65 00:34:54.057 lat (usec): min=13611, max=58192, avg=30683.88, stdev=2601.14 00:34:54.057 clat percentiles (usec): 00:34:54.057 | 1.00th=[19530], 5.00th=[29492], 10.00th=[30016], 20.00th=[30278], 00:34:54.057 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:34:54.057 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31851], 00:34:54.057 | 99.00th=[37487], 99.50th=[47449], 99.90th=[51119], 99.95th=[51119], 00:34:54.057 | 99.99th=[57934] 00:34:54.057 bw ( KiB/s): min= 1920, max= 2192, per=4.16%, avg=2071.05, stdev=73.68, samples=19 00:34:54.057 iops : min= 480, max= 548, avg=517.68, stdev=18.38, samples=19 00:34:54.057 lat (msec) : 20=1.21%, 50=98.59%, 100=0.19% 00:34:54.057 cpu : usr=99.17%, sys=0.51%, ctx=55, majf=0, minf=52 00:34:54.057 IO depths : 1=5.8%, 2=11.8%, 4=24.1%, 8=51.6%, 16=6.7%, 32=0.0%, >=64=0.0% 00:34:54.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.057 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.057 issued rwts: total=5192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.057 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.057 filename1: (groupid=0, jobs=1): err= 0: pid=3080241: Fri Jun 7 23:31:14 2024 00:34:54.057 read: IOPS=539, BW=2157KiB/s (2208kB/s)(21.1MiB/10034msec) 00:34:54.057 slat (nsec): min=5393, max=93841, avg=11146.04, stdev=9593.01 00:34:54.057 clat (usec): min=8007, max=56716, avg=29579.24, stdev=6078.31 00:34:54.057 lat (usec): min=8021, max=56722, avg=29590.38, stdev=6079.15 00:34:54.057 clat percentiles (usec): 00:34:54.057 | 1.00th=[12518], 5.00th=[18482], 10.00th=[20055], 20.00th=[26870], 00:34:54.057 | 30.00th=[29754], 40.00th=[30278], 50.00th=[30540], 60.00th=[30802], 00:34:54.057 | 70.00th=[30802], 80.00th=[31327], 90.00th=[33817], 95.00th=[40633], 00:34:54.057 | 99.00th=[50070], 99.50th=[51119], 99.90th=[55837], 99.95th=[56886], 00:34:54.057 | 99.99th=[56886] 00:34:54.057 bw ( KiB/s): min= 2000, max= 2368, per=4.33%, avg=2158.75, stdev=115.43, samples=20 00:34:54.057 iops : min= 500, max= 592, avg=539.50, stdev=28.75, samples=20 00:34:54.057 lat (msec) : 10=0.33%, 20=9.61%, 50=89.06%, 100=1.00% 00:34:54.057 cpu : usr=98.98%, sys=0.70%, ctx=27, majf=0, minf=72 00:34:54.057 IO depths : 1=0.8%, 2=1.7%, 4=9.4%, 
8=74.7%, 16=13.4%, 32=0.0%, >=64=0.0% 00:34:54.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.057 complete : 0=0.0%, 4=90.2%, 8=5.9%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.057 issued rwts: total=5410,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.057 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.057 filename1: (groupid=0, jobs=1): err= 0: pid=3080242: Fri Jun 7 23:31:14 2024 00:34:54.057 read: IOPS=534, BW=2139KiB/s (2190kB/s)(20.9MiB/10029msec) 00:34:54.057 slat (nsec): min=2971, max=37773, avg=6617.65, stdev=1917.70 00:34:54.057 clat (usec): min=987, max=32570, avg=29859.99, stdev=4863.11 00:34:54.057 lat (usec): min=993, max=32578, avg=29866.61, stdev=4863.19 00:34:54.057 clat percentiles (usec): 00:34:54.057 | 1.00th=[ 1909], 5.00th=[29492], 10.00th=[30016], 20.00th=[30540], 00:34:54.057 | 30.00th=[30540], 40.00th=[30802], 50.00th=[30802], 60.00th=[30802], 00:34:54.057 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:34:54.057 | 99.00th=[32375], 99.50th=[32375], 99.90th=[32637], 99.95th=[32637], 00:34:54.057 | 99.99th=[32637] 00:34:54.057 bw ( KiB/s): min= 2043, max= 3352, per=4.29%, avg=2138.30, stdev=290.46, samples=20 00:34:54.057 iops : min= 510, max= 838, avg=534.50, stdev=72.64, samples=20 00:34:54.057 lat (usec) : 1000=0.06% 00:34:54.057 lat (msec) : 2=1.03%, 4=1.31%, 10=0.48%, 20=0.56%, 50=96.57% 00:34:54.057 cpu : usr=99.08%, sys=0.58%, ctx=47, majf=0, minf=76 00:34:54.057 IO depths : 1=6.0%, 2=12.1%, 4=24.2%, 8=51.0%, 16=6.7%, 32=0.0%, >=64=0.0% 00:34:54.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.057 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.057 issued rwts: total=5363,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.057 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.058 filename1: (groupid=0, jobs=1): err= 0: pid=3080243: Fri Jun 7 23:31:14 2024 00:34:54.058 read: IOPS=521, BW=2086KiB/s (2137kB/s)(20.4MiB/10013msec) 00:34:54.058 slat (nsec): min=5516, max=95753, avg=10874.29, stdev=8743.40 00:34:54.058 clat (usec): min=5008, max=55053, avg=30584.13, stdev=7704.22 00:34:54.058 lat (usec): min=5014, max=55058, avg=30595.00, stdev=7703.18 00:34:54.058 clat percentiles (usec): 00:34:54.058 | 1.00th=[10028], 5.00th=[12125], 10.00th=[24773], 20.00th=[30016], 00:34:54.058 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30802], 60.00th=[30802], 00:34:54.058 | 70.00th=[31065], 80.00th=[31589], 90.00th=[33162], 95.00th=[49546], 00:34:54.058 | 99.00th=[51119], 99.50th=[52167], 99.90th=[54789], 99.95th=[54789], 00:34:54.058 | 99.99th=[55313] 00:34:54.058 bw ( KiB/s): min= 1992, max= 2336, per=4.19%, avg=2089.58, stdev=76.87, samples=19 00:34:54.058 iops : min= 498, max= 584, avg=522.32, stdev=19.26, samples=19 00:34:54.058 lat (msec) : 10=0.75%, 20=7.95%, 50=86.87%, 100=4.44% 00:34:54.058 cpu : usr=99.08%, sys=0.57%, ctx=100, majf=0, minf=71 00:34:54.058 IO depths : 1=0.1%, 2=0.4%, 4=14.3%, 8=70.6%, 16=14.7%, 32=0.0%, >=64=0.0% 00:34:54.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.058 complete : 0=0.0%, 4=92.1%, 8=4.5%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.058 issued rwts: total=5223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.058 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.058 filename1: (groupid=0, jobs=1): err= 0: pid=3080244: Fri Jun 7 23:31:14 2024 00:34:54.058 read: IOPS=518, BW=2073KiB/s (2123kB/s)(20.2MiB/10003msec) 
00:34:54.058 slat (nsec): min=5528, max=89127, avg=28673.98, stdev=14956.49 00:34:54.058 clat (usec): min=13331, max=48926, avg=30605.27, stdev=1477.95 00:34:54.058 lat (usec): min=13346, max=48943, avg=30633.95, stdev=1477.81 00:34:54.058 clat percentiles (usec): 00:34:54.058 | 1.00th=[28967], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:34:54.058 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:54.058 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:34:54.058 | 99.00th=[31851], 99.50th=[32637], 99.90th=[49021], 99.95th=[49021], 00:34:54.058 | 99.99th=[49021] 00:34:54.058 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2067.68, stdev=64.38, samples=19 00:34:54.058 iops : min= 480, max= 544, avg=516.84, stdev=16.13, samples=19 00:34:54.058 lat (msec) : 20=0.31%, 50=99.69% 00:34:54.058 cpu : usr=99.42%, sys=0.29%, ctx=32, majf=0, minf=70 00:34:54.058 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:54.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.058 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.058 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.058 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.058 filename1: (groupid=0, jobs=1): err= 0: pid=3080245: Fri Jun 7 23:31:14 2024 00:34:54.058 read: IOPS=518, BW=2075KiB/s (2125kB/s)(20.3MiB/10023msec) 00:34:54.058 slat (nsec): min=5576, max=57785, avg=10693.99, stdev=7026.98 00:34:54.058 clat (usec): min=16117, max=42124, avg=30750.70, stdev=1474.11 00:34:54.058 lat (usec): min=16131, max=42135, avg=30761.39, stdev=1474.17 00:34:54.058 clat percentiles (usec): 00:34:54.058 | 1.00th=[24773], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:34:54.058 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30802], 60.00th=[30802], 00:34:54.058 | 70.00th=[31065], 80.00th=[31327], 90.00th=[31589], 95.00th=[32113], 00:34:54.058 | 99.00th=[36439], 99.50th=[36963], 99.90th=[37487], 99.95th=[37487], 00:34:54.058 | 99.99th=[42206] 00:34:54.058 bw ( KiB/s): min= 2043, max= 2176, per=4.16%, avg=2072.55, stdev=46.79, samples=20 00:34:54.058 iops : min= 510, max= 544, avg=517.95, stdev=11.72, samples=20 00:34:54.058 lat (msec) : 20=0.27%, 50=99.73% 00:34:54.058 cpu : usr=98.15%, sys=0.95%, ctx=104, majf=0, minf=50 00:34:54.058 IO depths : 1=2.1%, 2=8.1%, 4=24.3%, 8=55.1%, 16=10.5%, 32=0.0%, >=64=0.0% 00:34:54.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.058 complete : 0=0.0%, 4=94.2%, 8=0.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.058 issued rwts: total=5200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.058 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.058 filename1: (groupid=0, jobs=1): err= 0: pid=3080246: Fri Jun 7 23:31:14 2024 00:34:54.058 read: IOPS=495, BW=1981KiB/s (2029kB/s)(19.4MiB/10006msec) 00:34:54.058 slat (usec): min=5, max=114, avg=16.68, stdev=17.41 00:34:54.058 clat (usec): min=10325, max=67627, avg=32226.08, stdev=5708.43 00:34:54.058 lat (usec): min=10332, max=67650, avg=32242.76, stdev=5708.15 00:34:54.058 clat percentiles (usec): 00:34:54.058 | 1.00th=[18744], 5.00th=[25560], 10.00th=[28443], 20.00th=[30278], 00:34:54.058 | 30.00th=[30540], 40.00th=[30802], 50.00th=[30802], 60.00th=[31327], 00:34:54.058 | 70.00th=[31589], 80.00th=[34341], 90.00th=[38011], 95.00th=[43779], 00:34:54.058 | 99.00th=[52167], 99.50th=[53216], 99.90th=[67634], 99.95th=[67634], 00:34:54.058 | 
99.99th=[67634] 00:34:54.058 bw ( KiB/s): min= 1795, max= 2120, per=3.96%, avg=1974.26, stdev=78.54, samples=19 00:34:54.058 iops : min= 448, max= 530, avg=493.53, stdev=19.73, samples=19 00:34:54.058 lat (msec) : 20=1.96%, 50=94.73%, 100=3.31% 00:34:54.058 cpu : usr=99.16%, sys=0.53%, ctx=27, majf=0, minf=55 00:34:54.058 IO depths : 1=0.1%, 2=0.1%, 4=5.0%, 8=78.9%, 16=16.0%, 32=0.0%, >=64=0.0% 00:34:54.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.058 complete : 0=0.0%, 4=89.7%, 8=7.9%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.058 issued rwts: total=4956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.058 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.058 filename1: (groupid=0, jobs=1): err= 0: pid=3080247: Fri Jun 7 23:31:14 2024 00:34:54.058 read: IOPS=517, BW=2070KiB/s (2119kB/s)(20.2MiB/10019msec) 00:34:54.058 slat (nsec): min=5517, max=99863, avg=20636.43, stdev=18061.53 00:34:54.058 clat (usec): min=24820, max=48095, avg=30767.48, stdev=1127.59 00:34:54.058 lat (usec): min=24828, max=48110, avg=30788.12, stdev=1124.78 00:34:54.058 clat percentiles (usec): 00:34:54.058 | 1.00th=[28967], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:34:54.058 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30802], 60.00th=[30802], 00:34:54.058 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:34:54.058 | 99.00th=[31851], 99.50th=[32900], 99.90th=[47973], 99.95th=[47973], 00:34:54.058 | 99.99th=[47973] 00:34:54.058 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2067.10, stdev=62.36, samples=20 00:34:54.058 iops : min= 480, max= 544, avg=516.70, stdev=15.70, samples=20 00:34:54.058 lat (msec) : 50=100.00% 00:34:54.058 cpu : usr=99.15%, sys=0.51%, ctx=100, majf=0, minf=36 00:34:54.058 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:54.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.058 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.058 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.058 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.058 filename1: (groupid=0, jobs=1): err= 0: pid=3080248: Fri Jun 7 23:31:14 2024 00:34:54.058 read: IOPS=519, BW=2077KiB/s (2127kB/s)(20.3MiB/10019msec) 00:34:54.058 slat (nsec): min=5539, max=99847, avg=26356.22, stdev=18139.22 00:34:54.058 clat (usec): min=17910, max=46219, avg=30608.36, stdev=1557.86 00:34:54.058 lat (usec): min=17916, max=46235, avg=30634.71, stdev=1557.63 00:34:54.058 clat percentiles (usec): 00:34:54.058 | 1.00th=[24773], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:34:54.058 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:34:54.058 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:34:54.058 | 99.00th=[35390], 99.50th=[35914], 99.90th=[46400], 99.95th=[46400], 00:34:54.058 | 99.99th=[46400] 00:34:54.058 bw ( KiB/s): min= 1920, max= 2192, per=4.16%, avg=2073.65, stdev=68.66, samples=20 00:34:54.058 iops : min= 480, max= 548, avg=518.30, stdev=17.22, samples=20 00:34:54.058 lat (msec) : 20=0.35%, 50=99.65% 00:34:54.058 cpu : usr=99.15%, sys=0.49%, ctx=54, majf=0, minf=34 00:34:54.058 IO depths : 1=6.0%, 2=12.0%, 4=24.4%, 8=51.1%, 16=6.6%, 32=0.0%, >=64=0.0% 00:34:54.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.058 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.058 issued rwts: 
total=5202,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.058 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.058 filename2: (groupid=0, jobs=1): err= 0: pid=3080249: Fri Jun 7 23:31:14 2024 00:34:54.058 read: IOPS=518, BW=2075KiB/s (2124kB/s)(20.3MiB/10018msec) 00:34:54.058 slat (usec): min=5, max=100, avg=35.35, stdev=19.44 00:34:54.058 clat (usec): min=18716, max=63175, avg=30518.57, stdev=1692.03 00:34:54.058 lat (usec): min=18725, max=63189, avg=30553.93, stdev=1693.08 00:34:54.058 clat percentiles (usec): 00:34:54.058 | 1.00th=[24249], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:34:54.058 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:34:54.058 | 70.00th=[30802], 80.00th=[30802], 90.00th=[31327], 95.00th=[31589], 00:34:54.058 | 99.00th=[31851], 99.50th=[38536], 99.90th=[46400], 99.95th=[46400], 00:34:54.058 | 99.99th=[63177] 00:34:54.058 bw ( KiB/s): min= 2016, max= 2176, per=4.16%, avg=2071.25, stdev=54.20, samples=20 00:34:54.058 iops : min= 504, max= 544, avg=517.70, stdev=13.62, samples=20 00:34:54.058 lat (msec) : 20=0.19%, 50=99.77%, 100=0.04% 00:34:54.058 cpu : usr=99.23%, sys=0.47%, ctx=29, majf=0, minf=45 00:34:54.058 IO depths : 1=6.0%, 2=12.2%, 4=24.6%, 8=50.8%, 16=6.5%, 32=0.0%, >=64=0.0% 00:34:54.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.058 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.058 issued rwts: total=5196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.058 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.058 filename2: (groupid=0, jobs=1): err= 0: pid=3080250: Fri Jun 7 23:31:14 2024 00:34:54.058 read: IOPS=518, BW=2075KiB/s (2125kB/s)(20.3MiB/10023msec) 00:34:54.058 slat (nsec): min=5526, max=91071, avg=10130.55, stdev=9334.54 00:34:54.058 clat (usec): min=21214, max=39427, avg=30756.09, stdev=1131.01 00:34:54.058 lat (usec): min=21230, max=39436, avg=30766.22, stdev=1130.96 00:34:54.058 clat percentiles (usec): 00:34:54.058 | 1.00th=[25560], 5.00th=[30016], 10.00th=[30278], 20.00th=[30540], 00:34:54.058 | 30.00th=[30540], 40.00th=[30802], 50.00th=[30802], 60.00th=[30802], 00:34:54.058 | 70.00th=[31065], 80.00th=[31065], 90.00th=[31327], 95.00th=[31851], 00:34:54.058 | 99.00th=[34866], 99.50th=[36963], 99.90th=[38536], 99.95th=[39060], 00:34:54.058 | 99.99th=[39584] 00:34:54.058 bw ( KiB/s): min= 2043, max= 2176, per=4.16%, avg=2072.55, stdev=52.49, samples=20 00:34:54.058 iops : min= 510, max= 544, avg=517.95, stdev=13.14, samples=20 00:34:54.058 lat (msec) : 50=100.00% 00:34:54.058 cpu : usr=95.58%, sys=2.29%, ctx=168, majf=0, minf=84 00:34:54.059 IO depths : 1=5.4%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.1%, 32=0.0%, >=64=0.0% 00:34:54.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.059 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.059 issued rwts: total=5200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.059 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.059 filename2: (groupid=0, jobs=1): err= 0: pid=3080251: Fri Jun 7 23:31:14 2024 00:34:54.059 read: IOPS=518, BW=2072KiB/s (2122kB/s)(20.2MiB/10006msec) 00:34:54.059 slat (nsec): min=5839, max=92331, avg=31909.88, stdev=15546.52 00:34:54.059 clat (usec): min=13262, max=51909, avg=30594.21, stdev=1603.65 00:34:54.059 lat (usec): min=13269, max=51931, avg=30626.12, stdev=1603.22 00:34:54.059 clat percentiles (usec): 00:34:54.059 | 1.00th=[28705], 5.00th=[29754], 10.00th=[30016], 
20.00th=[30278], 00:34:54.059 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:54.059 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:34:54.059 | 99.00th=[31851], 99.50th=[32375], 99.90th=[51643], 99.95th=[51643], 00:34:54.059 | 99.99th=[52167] 00:34:54.059 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2067.58, stdev=63.54, samples=19 00:34:54.059 iops : min= 480, max= 544, avg=516.74, stdev=15.95, samples=19 00:34:54.059 lat (msec) : 20=0.31%, 50=99.38%, 100=0.31% 00:34:54.059 cpu : usr=99.14%, sys=0.57%, ctx=15, majf=0, minf=61 00:34:54.059 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:54.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.059 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.059 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.059 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.059 filename2: (groupid=0, jobs=1): err= 0: pid=3080252: Fri Jun 7 23:31:14 2024 00:34:54.059 read: IOPS=518, BW=2073KiB/s (2123kB/s)(20.2MiB/10004msec) 00:34:54.059 slat (nsec): min=5531, max=95488, avg=32560.12, stdev=17885.78 00:34:54.059 clat (usec): min=13345, max=49518, avg=30606.09, stdev=1616.40 00:34:54.059 lat (usec): min=13369, max=49534, avg=30638.65, stdev=1615.84 00:34:54.059 clat percentiles (usec): 00:34:54.059 | 1.00th=[28705], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:34:54.059 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:54.059 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:34:54.059 | 99.00th=[32375], 99.50th=[32637], 99.90th=[49546], 99.95th=[49546], 00:34:54.059 | 99.99th=[49546] 00:34:54.059 bw ( KiB/s): min= 1920, max= 2160, per=4.15%, avg=2067.68, stdev=57.87, samples=19 00:34:54.059 iops : min= 480, max= 540, avg=516.84, stdev=14.50, samples=19 00:34:54.059 lat (msec) : 20=0.31%, 50=99.69% 00:34:54.059 cpu : usr=99.34%, sys=0.35%, ctx=27, majf=0, minf=65 00:34:54.059 IO depths : 1=0.8%, 2=7.0%, 4=25.0%, 8=55.5%, 16=11.7%, 32=0.0%, >=64=0.0% 00:34:54.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.059 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.059 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.059 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.059 filename2: (groupid=0, jobs=1): err= 0: pid=3080253: Fri Jun 7 23:31:14 2024 00:34:54.059 read: IOPS=518, BW=2073KiB/s (2123kB/s)(20.3MiB/10014msec) 00:34:54.059 slat (nsec): min=5547, max=87200, avg=19778.77, stdev=12906.26 00:34:54.059 clat (usec): min=12267, max=54520, avg=30712.88, stdev=2262.00 00:34:54.059 lat (usec): min=12276, max=54538, avg=30732.66, stdev=2262.08 00:34:54.059 clat percentiles (usec): 00:34:54.059 | 1.00th=[24249], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:34:54.059 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30802], 60.00th=[30802], 00:34:54.059 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31851], 00:34:54.059 | 99.00th=[37487], 99.50th=[46924], 99.90th=[52691], 99.95th=[52691], 00:34:54.059 | 99.99th=[54264] 00:34:54.059 bw ( KiB/s): min= 1984, max= 2176, per=4.15%, avg=2070.21, stdev=58.24, samples=19 00:34:54.059 iops : min= 496, max= 544, avg=517.47, stdev=14.45, samples=19 00:34:54.059 lat (msec) : 20=0.62%, 50=99.19%, 100=0.19% 00:34:54.059 cpu : usr=98.15%, sys=0.94%, ctx=66, majf=0, minf=62 
00:34:54.059 IO depths : 1=3.9%, 2=9.5%, 4=23.1%, 8=54.5%, 16=9.0%, 32=0.0%, >=64=0.0% 00:34:54.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.059 complete : 0=0.0%, 4=93.8%, 8=0.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.059 issued rwts: total=5190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.059 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.059 filename2: (groupid=0, jobs=1): err= 0: pid=3080254: Fri Jun 7 23:31:14 2024 00:34:54.059 read: IOPS=518, BW=2076KiB/s (2126kB/s)(20.3MiB/10020msec) 00:34:54.059 slat (nsec): min=5563, max=54655, avg=11316.65, stdev=6951.45 00:34:54.059 clat (usec): min=10107, max=38177, avg=30736.55, stdev=1442.87 00:34:54.059 lat (usec): min=10115, max=38186, avg=30747.87, stdev=1443.09 00:34:54.059 clat percentiles (usec): 00:34:54.059 | 1.00th=[25035], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:34:54.059 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30802], 60.00th=[30802], 00:34:54.059 | 70.00th=[31065], 80.00th=[31327], 90.00th=[31589], 95.00th=[31851], 00:34:54.059 | 99.00th=[35914], 99.50th=[36439], 99.90th=[37487], 99.95th=[38011], 00:34:54.059 | 99.99th=[38011] 00:34:54.059 bw ( KiB/s): min= 2043, max= 2176, per=4.16%, avg=2072.35, stdev=48.77, samples=20 00:34:54.059 iops : min= 510, max= 544, avg=517.90, stdev=12.20, samples=20 00:34:54.059 lat (msec) : 20=0.27%, 50=99.73% 00:34:54.059 cpu : usr=99.02%, sys=0.64%, ctx=127, majf=0, minf=90 00:34:54.059 IO depths : 1=4.8%, 2=11.1%, 4=24.9%, 8=51.5%, 16=7.7%, 32=0.0%, >=64=0.0% 00:34:54.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.059 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.059 issued rwts: total=5200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.059 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.059 filename2: (groupid=0, jobs=1): err= 0: pid=3080255: Fri Jun 7 23:31:14 2024 00:34:54.059 read: IOPS=518, BW=2073KiB/s (2123kB/s)(20.2MiB/10004msec) 00:34:54.059 slat (nsec): min=5553, max=91661, avg=30372.66, stdev=14169.18 00:34:54.059 clat (usec): min=7726, max=49612, avg=30602.90, stdev=1574.66 00:34:54.059 lat (usec): min=7732, max=49629, avg=30633.27, stdev=1574.21 00:34:54.059 clat percentiles (usec): 00:34:54.059 | 1.00th=[28967], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:34:54.059 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:54.059 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:34:54.059 | 99.00th=[31851], 99.50th=[32637], 99.90th=[49546], 99.95th=[49546], 00:34:54.059 | 99.99th=[49546] 00:34:54.059 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2067.68, stdev=64.60, samples=19 00:34:54.059 iops : min= 480, max= 544, avg=516.84, stdev=16.18, samples=19 00:34:54.059 lat (msec) : 10=0.04%, 20=0.31%, 50=99.65% 00:34:54.059 cpu : usr=98.99%, sys=0.72%, ctx=7, majf=0, minf=42 00:34:54.059 IO depths : 1=5.1%, 2=11.3%, 4=25.0%, 8=51.2%, 16=7.4%, 32=0.0%, >=64=0.0% 00:34:54.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.059 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.059 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.059 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.059 filename2: (groupid=0, jobs=1): err= 0: pid=3080256: Fri Jun 7 23:31:14 2024 00:34:54.059 read: IOPS=517, BW=2070KiB/s (2120kB/s)(20.2MiB/10016msec) 00:34:54.059 slat (nsec): 
min=5533, max=60033, avg=13045.17, stdev=8620.15 00:34:54.059 clat (usec): min=23728, max=42574, avg=30798.68, stdev=1531.02 00:34:54.059 lat (usec): min=23733, max=42589, avg=30811.73, stdev=1530.81 00:34:54.059 clat percentiles (usec): 00:34:54.059 | 1.00th=[25035], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:34:54.059 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30802], 60.00th=[30802], 00:34:54.059 | 70.00th=[31065], 80.00th=[31327], 90.00th=[31589], 95.00th=[32113], 00:34:54.059 | 99.00th=[36963], 99.50th=[38536], 99.90th=[42730], 99.95th=[42730], 00:34:54.059 | 99.99th=[42730] 00:34:54.059 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2066.95, stdev=59.65, samples=20 00:34:54.059 iops : min= 480, max= 544, avg=516.70, stdev=14.93, samples=20 00:34:54.059 lat (msec) : 50=100.00% 00:34:54.059 cpu : usr=99.22%, sys=0.47%, ctx=52, majf=0, minf=70 00:34:54.059 IO depths : 1=3.9%, 2=9.9%, 4=24.3%, 8=53.2%, 16=8.6%, 32=0.0%, >=64=0.0% 00:34:54.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.059 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.059 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.059 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:54.059 00:34:54.059 Run status group 0 (all jobs): 00:34:54.059 READ: bw=48.7MiB/s (51.0MB/s), 1981KiB/s-2170KiB/s (2029kB/s-2222kB/s), io=488MiB (512MB), run=10003-10035msec 00:34:54.059 23:31:14 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:54.059 23:31:14 -- target/dif.sh@43 -- # local sub 00:34:54.059 23:31:14 -- target/dif.sh@45 -- # for sub in "$@" 00:34:54.059 23:31:14 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:54.059 23:31:14 -- target/dif.sh@36 -- # local sub_id=0 00:34:54.059 23:31:14 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:54.059 23:31:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:54.059 23:31:14 -- common/autotest_common.sh@10 -- # set +x 00:34:54.059 23:31:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:54.059 23:31:14 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:54.059 23:31:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:54.059 23:31:14 -- common/autotest_common.sh@10 -- # set +x 00:34:54.059 23:31:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:54.059 23:31:14 -- target/dif.sh@45 -- # for sub in "$@" 00:34:54.059 23:31:14 -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:54.059 23:31:14 -- target/dif.sh@36 -- # local sub_id=1 00:34:54.059 23:31:14 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:54.059 23:31:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:54.059 23:31:14 -- common/autotest_common.sh@10 -- # set +x 00:34:54.059 23:31:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:54.059 23:31:14 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:54.059 23:31:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:54.059 23:31:14 -- common/autotest_common.sh@10 -- # set +x 00:34:54.059 23:31:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:54.059 23:31:14 -- target/dif.sh@45 -- # for sub in "$@" 00:34:54.059 23:31:14 -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:54.059 23:31:14 -- target/dif.sh@36 -- # local sub_id=2 00:34:54.059 23:31:14 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:54.060 23:31:14 -- common/autotest_common.sh@551 
-- # xtrace_disable 00:34:54.060 23:31:14 -- common/autotest_common.sh@10 -- # set +x 00:34:54.060 23:31:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:54.060 23:31:14 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:54.060 23:31:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:54.060 23:31:14 -- common/autotest_common.sh@10 -- # set +x 00:34:54.060 23:31:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:54.060 23:31:14 -- target/dif.sh@115 -- # NULL_DIF=1 00:34:54.060 23:31:14 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:54.060 23:31:14 -- target/dif.sh@115 -- # numjobs=2 00:34:54.060 23:31:14 -- target/dif.sh@115 -- # iodepth=8 00:34:54.060 23:31:14 -- target/dif.sh@115 -- # runtime=5 00:34:54.060 23:31:14 -- target/dif.sh@115 -- # files=1 00:34:54.060 23:31:14 -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:54.060 23:31:14 -- target/dif.sh@28 -- # local sub 00:34:54.060 23:31:14 -- target/dif.sh@30 -- # for sub in "$@" 00:34:54.060 23:31:14 -- target/dif.sh@31 -- # create_subsystem 0 00:34:54.060 23:31:14 -- target/dif.sh@18 -- # local sub_id=0 00:34:54.060 23:31:14 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:54.060 23:31:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:54.060 23:31:14 -- common/autotest_common.sh@10 -- # set +x 00:34:54.060 bdev_null0 00:34:54.060 23:31:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:54.060 23:31:14 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:54.060 23:31:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:54.060 23:31:14 -- common/autotest_common.sh@10 -- # set +x 00:34:54.060 23:31:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:54.060 23:31:14 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:54.060 23:31:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:54.060 23:31:14 -- common/autotest_common.sh@10 -- # set +x 00:34:54.060 23:31:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:54.060 23:31:14 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:54.060 23:31:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:54.060 23:31:14 -- common/autotest_common.sh@10 -- # set +x 00:34:54.060 [2024-06-07 23:31:14.899371] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:54.060 23:31:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:54.060 23:31:14 -- target/dif.sh@30 -- # for sub in "$@" 00:34:54.060 23:31:14 -- target/dif.sh@31 -- # create_subsystem 1 00:34:54.060 23:31:14 -- target/dif.sh@18 -- # local sub_id=1 00:34:54.060 23:31:14 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:54.060 23:31:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:54.060 23:31:14 -- common/autotest_common.sh@10 -- # set +x 00:34:54.060 bdev_null1 00:34:54.060 23:31:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:54.060 23:31:14 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:54.060 23:31:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:54.060 23:31:14 -- common/autotest_common.sh@10 -- # set +x 00:34:54.060 23:31:14 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:34:54.060 23:31:14 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:54.060 23:31:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:54.060 23:31:14 -- common/autotest_common.sh@10 -- # set +x 00:34:54.060 23:31:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:54.060 23:31:14 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:54.060 23:31:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:54.060 23:31:14 -- common/autotest_common.sh@10 -- # set +x 00:34:54.060 23:31:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:54.060 23:31:14 -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:54.060 23:31:14 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:54.060 23:31:14 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:54.060 23:31:14 -- nvmf/common.sh@520 -- # config=() 00:34:54.060 23:31:14 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:54.060 23:31:14 -- nvmf/common.sh@520 -- # local subsystem config 00:34:54.060 23:31:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:54.060 23:31:14 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:54.060 23:31:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:54.060 { 00:34:54.060 "params": { 00:34:54.060 "name": "Nvme$subsystem", 00:34:54.060 "trtype": "$TEST_TRANSPORT", 00:34:54.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:54.060 "adrfam": "ipv4", 00:34:54.060 "trsvcid": "$NVMF_PORT", 00:34:54.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:54.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:54.060 "hdgst": ${hdgst:-false}, 00:34:54.060 "ddgst": ${ddgst:-false} 00:34:54.060 }, 00:34:54.060 "method": "bdev_nvme_attach_controller" 00:34:54.060 } 00:34:54.060 EOF 00:34:54.060 )") 00:34:54.060 23:31:14 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:54.060 23:31:14 -- target/dif.sh@82 -- # gen_fio_conf 00:34:54.060 23:31:14 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:54.060 23:31:14 -- target/dif.sh@54 -- # local file 00:34:54.060 23:31:14 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:54.060 23:31:14 -- target/dif.sh@56 -- # cat 00:34:54.060 23:31:14 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:54.060 23:31:14 -- common/autotest_common.sh@1320 -- # shift 00:34:54.060 23:31:14 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:54.060 23:31:14 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:54.060 23:31:14 -- nvmf/common.sh@542 -- # cat 00:34:54.060 23:31:14 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:54.060 23:31:14 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:54.060 23:31:14 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:54.060 23:31:14 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:54.060 23:31:14 -- target/dif.sh@72 -- # (( file <= files )) 00:34:54.060 23:31:14 -- target/dif.sh@73 -- # cat 00:34:54.060 23:31:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:54.060 23:31:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 
00:34:54.060 { 00:34:54.060 "params": { 00:34:54.060 "name": "Nvme$subsystem", 00:34:54.060 "trtype": "$TEST_TRANSPORT", 00:34:54.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:54.060 "adrfam": "ipv4", 00:34:54.060 "trsvcid": "$NVMF_PORT", 00:34:54.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:54.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:54.060 "hdgst": ${hdgst:-false}, 00:34:54.060 "ddgst": ${ddgst:-false} 00:34:54.060 }, 00:34:54.060 "method": "bdev_nvme_attach_controller" 00:34:54.060 } 00:34:54.060 EOF 00:34:54.060 )") 00:34:54.060 23:31:14 -- target/dif.sh@72 -- # (( file++ )) 00:34:54.060 23:31:14 -- nvmf/common.sh@542 -- # cat 00:34:54.060 23:31:14 -- target/dif.sh@72 -- # (( file <= files )) 00:34:54.060 23:31:14 -- nvmf/common.sh@544 -- # jq . 00:34:54.060 23:31:14 -- nvmf/common.sh@545 -- # IFS=, 00:34:54.060 23:31:14 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:54.060 "params": { 00:34:54.060 "name": "Nvme0", 00:34:54.060 "trtype": "tcp", 00:34:54.060 "traddr": "10.0.0.2", 00:34:54.060 "adrfam": "ipv4", 00:34:54.060 "trsvcid": "4420", 00:34:54.060 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:54.060 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:54.060 "hdgst": false, 00:34:54.060 "ddgst": false 00:34:54.060 }, 00:34:54.060 "method": "bdev_nvme_attach_controller" 00:34:54.060 },{ 00:34:54.060 "params": { 00:34:54.060 "name": "Nvme1", 00:34:54.060 "trtype": "tcp", 00:34:54.060 "traddr": "10.0.0.2", 00:34:54.060 "adrfam": "ipv4", 00:34:54.060 "trsvcid": "4420", 00:34:54.060 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:54.060 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:54.060 "hdgst": false, 00:34:54.060 "ddgst": false 00:34:54.060 }, 00:34:54.060 "method": "bdev_nvme_attach_controller" 00:34:54.060 }' 00:34:54.060 23:31:14 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:54.060 23:31:14 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:54.060 23:31:14 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:54.060 23:31:14 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:54.060 23:31:14 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:54.060 23:31:14 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:54.060 23:31:15 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:54.060 23:31:15 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:54.060 23:31:15 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:54.060 23:31:15 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:54.060 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:54.060 ... 00:34:54.060 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:54.060 ... 00:34:54.060 fio-3.35 00:34:54.060 Starting 4 threads 00:34:54.060 EAL: No free 2048 kB hugepages reported on node 1 00:34:54.060 [2024-06-07 23:31:15.930620] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:34:54.060 [2024-06-07 23:31:15.930669] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:59.348 00:34:59.348 filename0: (groupid=0, jobs=1): err= 0: pid=3082743: Fri Jun 7 23:31:21 2024 00:34:59.348 read: IOPS=2189, BW=17.1MiB/s (17.9MB/s)(85.6MiB/5003msec) 00:34:59.348 slat (nsec): min=5429, max=46035, avg=8453.63, stdev=2577.10 00:34:59.348 clat (usec): min=1637, max=43990, avg=3631.01, stdev=1223.78 00:34:59.348 lat (usec): min=1645, max=44022, avg=3639.47, stdev=1223.91 00:34:59.348 clat percentiles (usec): 00:34:59.348 | 1.00th=[ 2638], 5.00th=[ 2966], 10.00th=[ 3130], 20.00th=[ 3261], 00:34:59.348 | 30.00th=[ 3359], 40.00th=[ 3425], 50.00th=[ 3523], 60.00th=[ 3589], 00:34:59.348 | 70.00th=[ 3621], 80.00th=[ 3654], 90.00th=[ 4359], 95.00th=[ 5145], 00:34:59.348 | 99.00th=[ 5407], 99.50th=[ 5669], 99.90th=[ 6390], 99.95th=[43779], 00:34:59.348 | 99.99th=[43779] 00:34:59.348 bw ( KiB/s): min=16032, max=18640, per=25.16%, avg=17587.56, stdev=725.88, samples=9 00:34:59.348 iops : min= 2004, max= 2330, avg=2198.44, stdev=90.73, samples=9 00:34:59.348 lat (msec) : 2=0.08%, 4=87.23%, 10=12.62%, 50=0.07% 00:34:59.348 cpu : usr=96.90%, sys=2.82%, ctx=7, majf=0, minf=0 00:34:59.348 IO depths : 1=0.1%, 2=0.5%, 4=69.3%, 8=30.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:59.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.348 complete : 0=0.0%, 4=94.6%, 8=5.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.348 issued rwts: total=10955,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:59.348 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:59.348 filename0: (groupid=0, jobs=1): err= 0: pid=3082744: Fri Jun 7 23:31:21 2024 00:34:59.348 read: IOPS=2214, BW=17.3MiB/s (18.1MB/s)(86.5MiB/5002msec) 00:34:59.348 slat (nsec): min=5350, max=44056, avg=6554.83, stdev=2586.68 00:34:59.348 clat (usec): min=1298, max=6413, avg=3595.07, stdev=571.05 00:34:59.348 lat (usec): min=1304, max=6421, avg=3601.63, stdev=570.76 00:34:59.348 clat percentiles (usec): 00:34:59.348 | 1.00th=[ 2376], 5.00th=[ 2900], 10.00th=[ 3097], 20.00th=[ 3294], 00:34:59.348 | 30.00th=[ 3392], 40.00th=[ 3425], 50.00th=[ 3490], 60.00th=[ 3589], 00:34:59.348 | 70.00th=[ 3621], 80.00th=[ 3654], 90.00th=[ 4424], 95.00th=[ 5014], 00:34:59.348 | 99.00th=[ 5407], 99.50th=[ 5538], 99.90th=[ 5866], 99.95th=[ 6194], 00:34:59.348 | 99.99th=[ 6390] 00:34:59.348 bw ( KiB/s): min=17200, max=18052, per=25.26%, avg=17657.33, stdev=338.40, samples=9 00:34:59.348 iops : min= 2150, max= 2256, avg=2207.11, stdev=42.23, samples=9 00:34:59.348 lat (msec) : 2=0.24%, 4=87.30%, 10=12.45% 00:34:59.348 cpu : usr=97.94%, sys=1.82%, ctx=9, majf=0, minf=9 00:34:59.348 IO depths : 1=0.1%, 2=0.6%, 4=71.2%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:59.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.348 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.348 issued rwts: total=11075,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:59.348 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:59.348 filename1: (groupid=0, jobs=1): err= 0: pid=3082745: Fri Jun 7 23:31:21 2024 00:34:59.348 read: IOPS=2189, BW=17.1MiB/s (17.9MB/s)(85.6MiB/5002msec) 00:34:59.348 slat (nsec): min=5352, max=48341, avg=6474.80, stdev=2580.58 00:34:59.348 clat (usec): min=1513, max=6593, avg=3635.79, stdev=579.91 00:34:59.348 lat (usec): min=1519, max=6598, avg=3642.26, stdev=579.63 00:34:59.349 clat percentiles (usec): 00:34:59.349 | 1.00th=[ 
2442], 5.00th=[ 3064], 10.00th=[ 3195], 20.00th=[ 3294], 00:34:59.349 | 30.00th=[ 3392], 40.00th=[ 3425], 50.00th=[ 3523], 60.00th=[ 3589], 00:34:59.349 | 70.00th=[ 3621], 80.00th=[ 3687], 90.00th=[ 4555], 95.00th=[ 5145], 00:34:59.349 | 99.00th=[ 5407], 99.50th=[ 5538], 99.90th=[ 5932], 99.95th=[ 6063], 00:34:59.349 | 99.99th=[ 6587] 00:34:59.349 bw ( KiB/s): min=17056, max=17936, per=24.95%, avg=17434.67, stdev=275.39, samples=9 00:34:59.349 iops : min= 2132, max= 2242, avg=2179.33, stdev=34.42, samples=9 00:34:59.349 lat (msec) : 2=0.16%, 4=84.94%, 10=14.90% 00:34:59.349 cpu : usr=97.44%, sys=2.32%, ctx=12, majf=0, minf=9 00:34:59.349 IO depths : 1=0.2%, 2=0.6%, 4=72.4%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:59.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.349 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.349 issued rwts: total=10951,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:59.349 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:59.349 filename1: (groupid=0, jobs=1): err= 0: pid=3082746: Fri Jun 7 23:31:21 2024 00:34:59.349 read: IOPS=2144, BW=16.8MiB/s (17.6MB/s)(83.8MiB/5001msec) 00:34:59.349 slat (nsec): min=5351, max=49906, avg=6457.12, stdev=2339.78 00:34:59.349 clat (usec): min=1939, max=45457, avg=3713.07, stdev=1295.05 00:34:59.349 lat (usec): min=1945, max=45507, avg=3719.53, stdev=1295.25 00:34:59.349 clat percentiles (usec): 00:34:59.349 | 1.00th=[ 2671], 5.00th=[ 3064], 10.00th=[ 3195], 20.00th=[ 3294], 00:34:59.349 | 30.00th=[ 3359], 40.00th=[ 3458], 50.00th=[ 3523], 60.00th=[ 3589], 00:34:59.349 | 70.00th=[ 3621], 80.00th=[ 3818], 90.00th=[ 4948], 95.00th=[ 5211], 00:34:59.349 | 99.00th=[ 5473], 99.50th=[ 5604], 99.90th=[ 6325], 99.95th=[45351], 00:34:59.349 | 99.99th=[45351] 00:34:59.349 bw ( KiB/s): min=16176, max=18176, per=24.61%, avg=17200.00, stdev=549.91, samples=9 00:34:59.349 iops : min= 2022, max= 2272, avg=2150.00, stdev=68.74, samples=9 00:34:59.349 lat (msec) : 2=0.03%, 4=82.92%, 10=16.98%, 50=0.07% 00:34:59.349 cpu : usr=97.36%, sys=2.42%, ctx=8, majf=0, minf=0 00:34:59.349 IO depths : 1=0.1%, 2=0.3%, 4=71.8%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:59.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.349 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:59.349 issued rwts: total=10725,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:59.349 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:59.349 00:34:59.349 Run status group 0 (all jobs): 00:34:59.349 READ: bw=68.2MiB/s (71.6MB/s), 16.8MiB/s-17.3MiB/s (17.6MB/s-18.1MB/s), io=341MiB (358MB), run=5001-5003msec 00:34:59.349 23:31:21 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:59.349 23:31:21 -- target/dif.sh@43 -- # local sub 00:34:59.349 23:31:21 -- target/dif.sh@45 -- # for sub in "$@" 00:34:59.349 23:31:21 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:59.349 23:31:21 -- target/dif.sh@36 -- # local sub_id=0 00:34:59.349 23:31:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:59.349 23:31:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:59.349 23:31:21 -- common/autotest_common.sh@10 -- # set +x 00:34:59.349 23:31:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:59.349 23:31:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:59.349 23:31:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:59.349 23:31:21 -- common/autotest_common.sh@10 
-- # set +x 00:34:59.349 23:31:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:59.349 23:31:21 -- target/dif.sh@45 -- # for sub in "$@" 00:34:59.349 23:31:21 -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:59.349 23:31:21 -- target/dif.sh@36 -- # local sub_id=1 00:34:59.349 23:31:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:59.349 23:31:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:59.349 23:31:21 -- common/autotest_common.sh@10 -- # set +x 00:34:59.349 23:31:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:59.349 23:31:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:59.349 23:31:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:59.349 23:31:21 -- common/autotest_common.sh@10 -- # set +x 00:34:59.349 23:31:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:59.349 00:34:59.349 real 0m24.249s 00:34:59.349 user 5m14.479s 00:34:59.349 sys 0m3.767s 00:34:59.349 23:31:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:59.349 23:31:21 -- common/autotest_common.sh@10 -- # set +x 00:34:59.349 ************************************ 00:34:59.349 END TEST fio_dif_rand_params 00:34:59.349 ************************************ 00:34:59.349 23:31:21 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:59.349 23:31:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:59.349 23:31:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:59.349 23:31:21 -- common/autotest_common.sh@10 -- # set +x 00:34:59.349 ************************************ 00:34:59.349 START TEST fio_dif_digest 00:34:59.349 ************************************ 00:34:59.349 23:31:21 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:34:59.349 23:31:21 -- target/dif.sh@123 -- # local NULL_DIF 00:34:59.349 23:31:21 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:59.349 23:31:21 -- target/dif.sh@125 -- # local hdgst ddgst 00:34:59.349 23:31:21 -- target/dif.sh@127 -- # NULL_DIF=3 00:34:59.349 23:31:21 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:59.349 23:31:21 -- target/dif.sh@127 -- # numjobs=3 00:34:59.349 23:31:21 -- target/dif.sh@127 -- # iodepth=3 00:34:59.349 23:31:21 -- target/dif.sh@127 -- # runtime=10 00:34:59.349 23:31:21 -- target/dif.sh@128 -- # hdgst=true 00:34:59.349 23:31:21 -- target/dif.sh@128 -- # ddgst=true 00:34:59.349 23:31:21 -- target/dif.sh@130 -- # create_subsystems 0 00:34:59.349 23:31:21 -- target/dif.sh@28 -- # local sub 00:34:59.349 23:31:21 -- target/dif.sh@30 -- # for sub in "$@" 00:34:59.349 23:31:21 -- target/dif.sh@31 -- # create_subsystem 0 00:34:59.349 23:31:21 -- target/dif.sh@18 -- # local sub_id=0 00:34:59.349 23:31:21 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:59.349 23:31:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:59.349 23:31:21 -- common/autotest_common.sh@10 -- # set +x 00:34:59.349 bdev_null0 00:34:59.349 23:31:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:59.349 23:31:21 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:59.349 23:31:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:59.349 23:31:21 -- common/autotest_common.sh@10 -- # set +x 00:34:59.349 23:31:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:59.349 23:31:21 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:59.349 23:31:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:59.349 23:31:21 -- common/autotest_common.sh@10 -- # set +x 00:34:59.349 23:31:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:59.349 23:31:21 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:59.349 23:31:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:59.349 23:31:21 -- common/autotest_common.sh@10 -- # set +x 00:34:59.349 [2024-06-07 23:31:21.316104] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:59.349 23:31:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:59.349 23:31:21 -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:59.349 23:31:21 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:59.349 23:31:21 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:59.349 23:31:21 -- nvmf/common.sh@520 -- # config=() 00:34:59.349 23:31:21 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:59.349 23:31:21 -- nvmf/common.sh@520 -- # local subsystem config 00:34:59.349 23:31:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:59.349 23:31:21 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:59.349 23:31:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:59.349 { 00:34:59.349 "params": { 00:34:59.349 "name": "Nvme$subsystem", 00:34:59.349 "trtype": "$TEST_TRANSPORT", 00:34:59.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:59.349 "adrfam": "ipv4", 00:34:59.349 "trsvcid": "$NVMF_PORT", 00:34:59.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:59.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:59.349 "hdgst": ${hdgst:-false}, 00:34:59.349 "ddgst": ${ddgst:-false} 00:34:59.349 }, 00:34:59.349 "method": "bdev_nvme_attach_controller" 00:34:59.349 } 00:34:59.349 EOF 00:34:59.349 )") 00:34:59.349 23:31:21 -- target/dif.sh@82 -- # gen_fio_conf 00:34:59.349 23:31:21 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:59.349 23:31:21 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:59.349 23:31:21 -- target/dif.sh@54 -- # local file 00:34:59.349 23:31:21 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:59.349 23:31:21 -- target/dif.sh@56 -- # cat 00:34:59.349 23:31:21 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:59.349 23:31:21 -- common/autotest_common.sh@1320 -- # shift 00:34:59.349 23:31:21 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:59.349 23:31:21 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:59.349 23:31:21 -- nvmf/common.sh@542 -- # cat 00:34:59.349 23:31:21 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:59.349 23:31:21 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:59.349 23:31:21 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:59.349 23:31:21 -- target/dif.sh@72 -- # (( file <= files )) 00:34:59.349 23:31:21 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:59.349 23:31:21 -- nvmf/common.sh@544 -- # jq . 
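For orientation, the xtrace above shows the digest test building its target: a 64 MiB null bdev with 512-byte blocks, 16-byte metadata and DIF type 3, exported over NVMe/TCP. The rpc_cmd calls in the trace are a thin wrapper around scripts/rpc.py; a minimal sketch of the same setup done by hand, using the values from this run, would be:

    # 64 MiB null bdev, 512-byte blocks, 16 bytes of metadata, DIF type 3
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    # subsystem + namespace + TCP listener on the target-namespace address
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420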
00:34:59.349 23:31:21 -- nvmf/common.sh@545 -- # IFS=, 00:34:59.349 23:31:21 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:59.349 "params": { 00:34:59.349 "name": "Nvme0", 00:34:59.349 "trtype": "tcp", 00:34:59.349 "traddr": "10.0.0.2", 00:34:59.349 "adrfam": "ipv4", 00:34:59.349 "trsvcid": "4420", 00:34:59.349 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:59.349 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:59.349 "hdgst": true, 00:34:59.349 "ddgst": true 00:34:59.349 }, 00:34:59.349 "method": "bdev_nvme_attach_controller" 00:34:59.349 }' 00:34:59.350 23:31:21 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:59.350 23:31:21 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:59.350 23:31:21 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:59.350 23:31:21 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:59.350 23:31:21 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:59.350 23:31:21 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:59.350 23:31:21 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:59.350 23:31:21 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:59.350 23:31:21 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:59.350 23:31:21 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:59.350 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:59.350 ... 00:34:59.350 fio-3.35 00:34:59.350 Starting 3 threads 00:34:59.350 EAL: No free 2048 kB hugepages reported on node 1 00:34:59.611 [2024-06-07 23:31:22.134474] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
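The JSON block printed just above is the bdev_nvme_attach_controller parameter set (note hdgst/ddgst enabled) that gen_nvmf_target_json wraps into a bdev-subsystem config and hands to the fio bdev plugin over /dev/fd/62. Run outside the harness, the invocation looks roughly like the sketch below; nvme0.json stands in for that generated config, Nvme0n1 is the bdev name the attach call is expected to produce, and the job options mirror the 128k / 3-job / qd-3 / 10s parameters set by dif.sh:

    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
      fio --ioengine=spdk_bdev --spdk_json_conf=nvme0.json --thread \
          --name=filename0 --filename=Nvme0n1 --rw=randread --bs=128k \
          --numjobs=3 --iodepth=3 --time_based --runtime=10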
00:34:59.611 [2024-06-07 23:31:22.134520] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:35:11.841 00:35:11.841 filename0: (groupid=0, jobs=1): err= 0: pid=3083992: Fri Jun 7 23:31:32 2024 00:35:11.841 read: IOPS=200, BW=25.1MiB/s (26.3MB/s)(252MiB/10047msec) 00:35:11.841 slat (nsec): min=5617, max=30815, avg=6439.88, stdev=671.95 00:35:11.841 clat (usec): min=8458, max=98376, avg=14927.61, stdev=7667.25 00:35:11.841 lat (usec): min=8465, max=98382, avg=14934.05, stdev=7667.33 00:35:11.841 clat percentiles (usec): 00:35:11.841 | 1.00th=[ 9634], 5.00th=[10552], 10.00th=[11338], 20.00th=[12518], 00:35:11.841 | 30.00th=[13042], 40.00th=[13435], 50.00th=[13829], 60.00th=[14222], 00:35:11.841 | 70.00th=[14484], 80.00th=[14877], 90.00th=[15664], 95.00th=[16450], 00:35:11.841 | 99.00th=[55837], 99.50th=[56361], 99.90th=[57934], 99.95th=[95945], 00:35:11.841 | 99.99th=[98042] 00:35:11.841 bw ( KiB/s): min=21248, max=28160, per=31.16%, avg=25766.40, stdev=1655.48, samples=20 00:35:11.841 iops : min= 166, max= 220, avg=201.30, stdev=12.93, samples=20 00:35:11.841 lat (msec) : 10=2.38%, 20=94.49%, 50=0.05%, 100=3.08% 00:35:11.841 cpu : usr=96.78%, sys=2.98%, ctx=23, majf=0, minf=92 00:35:11.841 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:11.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.841 issued rwts: total=2015,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.841 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:11.841 filename0: (groupid=0, jobs=1): err= 0: pid=3083993: Fri Jun 7 23:31:32 2024 00:35:11.841 read: IOPS=260, BW=32.6MiB/s (34.2MB/s)(327MiB/10043msec) 00:35:11.841 slat (nsec): min=5678, max=31629, avg=6462.49, stdev=1161.90 00:35:11.841 clat (usec): min=6189, max=53119, avg=11484.22, stdev=1940.70 00:35:11.841 lat (usec): min=6196, max=53125, avg=11490.68, stdev=1940.65 00:35:11.841 clat percentiles (usec): 00:35:11.841 | 1.00th=[ 7111], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9896], 00:35:11.841 | 30.00th=[10814], 40.00th=[11469], 50.00th=[11863], 60.00th=[12125], 00:35:11.841 | 70.00th=[12387], 80.00th=[12780], 90.00th=[13173], 95.00th=[13566], 00:35:11.841 | 99.00th=[14353], 99.50th=[14615], 99.90th=[15401], 99.95th=[49546], 00:35:11.841 | 99.99th=[53216] 00:35:11.841 bw ( KiB/s): min=32000, max=35328, per=40.50%, avg=33484.80, stdev=927.12, samples=20 00:35:11.841 iops : min= 250, max= 276, avg=261.60, stdev= 7.24, samples=20 00:35:11.841 lat (msec) : 10=20.47%, 20=79.45%, 50=0.04%, 100=0.04% 00:35:11.841 cpu : usr=95.85%, sys=3.90%, ctx=21, majf=0, minf=182 00:35:11.841 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:11.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.841 issued rwts: total=2618,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.841 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:11.841 filename0: (groupid=0, jobs=1): err= 0: pid=3083994: Fri Jun 7 23:31:32 2024 00:35:11.841 read: IOPS=184, BW=23.1MiB/s (24.2MB/s)(232MiB/10044msec) 00:35:11.841 slat (nsec): min=5592, max=29442, avg=6784.60, stdev=1225.13 00:35:11.841 clat (usec): min=8638, max=95076, avg=16194.97, stdev=9008.17 00:35:11.841 lat (usec): min=8650, max=95082, avg=16201.75, stdev=9008.15 00:35:11.841 clat percentiles 
(usec): 00:35:11.841 | 1.00th=[10028], 5.00th=[11076], 10.00th=[12387], 20.00th=[13304], 00:35:11.841 | 30.00th=[13698], 40.00th=[14091], 50.00th=[14484], 60.00th=[14877], 00:35:11.841 | 70.00th=[15139], 80.00th=[15664], 90.00th=[16450], 95.00th=[18220], 00:35:11.841 | 99.00th=[56361], 99.50th=[56886], 99.90th=[93848], 99.95th=[94897], 00:35:11.841 | 99.99th=[94897] 00:35:11.841 bw ( KiB/s): min=19712, max=26368, per=28.72%, avg=23744.00, stdev=1832.68, samples=20 00:35:11.841 iops : min= 154, max= 206, avg=185.50, stdev=14.32, samples=20 00:35:11.841 lat (msec) : 10=0.86%, 20=94.45%, 50=0.05%, 100=4.63% 00:35:11.841 cpu : usr=94.73%, sys=4.20%, ctx=877, majf=0, minf=143 00:35:11.841 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:11.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.841 issued rwts: total=1857,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.841 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:11.841 00:35:11.841 Run status group 0 (all jobs): 00:35:11.841 READ: bw=80.7MiB/s (84.7MB/s), 23.1MiB/s-32.6MiB/s (24.2MB/s-34.2MB/s), io=811MiB (851MB), run=10043-10047msec 00:35:11.841 23:31:32 -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:11.841 23:31:32 -- target/dif.sh@43 -- # local sub 00:35:11.841 23:31:32 -- target/dif.sh@45 -- # for sub in "$@" 00:35:11.841 23:31:32 -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:11.841 23:31:32 -- target/dif.sh@36 -- # local sub_id=0 00:35:11.841 23:31:32 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:11.841 23:31:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:11.841 23:31:32 -- common/autotest_common.sh@10 -- # set +x 00:35:11.841 23:31:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:11.841 23:31:32 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:11.841 23:31:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:11.841 23:31:32 -- common/autotest_common.sh@10 -- # set +x 00:35:11.841 23:31:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:11.841 00:35:11.841 real 0m11.144s 00:35:11.841 user 0m41.746s 00:35:11.841 sys 0m1.414s 00:35:11.841 23:31:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:11.841 23:31:32 -- common/autotest_common.sh@10 -- # set +x 00:35:11.841 ************************************ 00:35:11.841 END TEST fio_dif_digest 00:35:11.841 ************************************ 00:35:11.841 23:31:32 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:11.841 23:31:32 -- target/dif.sh@147 -- # nvmftestfini 00:35:11.841 23:31:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:35:11.841 23:31:32 -- nvmf/common.sh@116 -- # sync 00:35:11.842 23:31:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:35:11.842 23:31:32 -- nvmf/common.sh@119 -- # set +e 00:35:11.842 23:31:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:35:11.842 23:31:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:35:11.842 rmmod nvme_tcp 00:35:11.842 rmmod nvme_fabrics 00:35:11.842 rmmod nvme_keyring 00:35:11.842 23:31:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:35:11.842 23:31:32 -- nvmf/common.sh@123 -- # set -e 00:35:11.842 23:31:32 -- nvmf/common.sh@124 -- # return 0 00:35:11.842 23:31:32 -- nvmf/common.sh@477 -- # '[' -n 3073621 ']' 00:35:11.842 23:31:32 -- nvmf/common.sh@478 -- # killprocess 3073621 00:35:11.842 23:31:32 -- 
common/autotest_common.sh@926 -- # '[' -z 3073621 ']' 00:35:11.842 23:31:32 -- common/autotest_common.sh@930 -- # kill -0 3073621 00:35:11.842 23:31:32 -- common/autotest_common.sh@931 -- # uname 00:35:11.842 23:31:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:11.842 23:31:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3073621 00:35:11.842 23:31:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:35:11.842 23:31:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:35:11.842 23:31:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3073621' 00:35:11.842 killing process with pid 3073621 00:35:11.842 23:31:32 -- common/autotest_common.sh@945 -- # kill 3073621 00:35:11.842 23:31:32 -- common/autotest_common.sh@950 -- # wait 3073621 00:35:11.842 23:31:32 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:35:11.842 23:31:32 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:13.224 Waiting for block devices as requested 00:35:13.224 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:13.485 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:13.485 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:13.485 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:13.745 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:13.745 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:13.745 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:14.005 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:14.005 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:35:14.005 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:14.266 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:14.266 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:14.266 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:14.266 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:14.526 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:14.526 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:14.526 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:14.526 23:31:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:35:14.526 23:31:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:35:14.526 23:31:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:14.526 23:31:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:35:14.526 23:31:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:14.526 23:31:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:14.526 23:31:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:17.072 23:31:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:35:17.072 00:35:17.072 real 1m16.646s 00:35:17.072 user 7m56.572s 00:35:17.072 sys 0m18.972s 00:35:17.072 23:31:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:17.072 23:31:39 -- common/autotest_common.sh@10 -- # set +x 00:35:17.072 ************************************ 00:35:17.072 END TEST nvmf_dif 00:35:17.072 ************************************ 00:35:17.072 23:31:39 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:17.072 23:31:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:35:17.072 23:31:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:17.072 23:31:39 -- common/autotest_common.sh@10 -- # set +x 00:35:17.072 ************************************ 00:35:17.072 START TEST nvmf_abort_qd_sizes 
00:35:17.072 ************************************ 00:35:17.072 23:31:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:17.072 * Looking for test storage... 00:35:17.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:17.072 23:31:39 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:17.072 23:31:39 -- nvmf/common.sh@7 -- # uname -s 00:35:17.072 23:31:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:17.072 23:31:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:17.072 23:31:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:17.072 23:31:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:17.072 23:31:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:17.072 23:31:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:17.072 23:31:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:17.072 23:31:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:17.072 23:31:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:17.072 23:31:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:17.072 23:31:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:17.072 23:31:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:17.072 23:31:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:17.072 23:31:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:17.072 23:31:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:17.072 23:31:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:17.072 23:31:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:17.072 23:31:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:17.072 23:31:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:17.072 23:31:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.072 23:31:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.072 23:31:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.072 23:31:39 -- paths/export.sh@5 -- # export PATH 00:35:17.072 23:31:39 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.072 23:31:39 -- nvmf/common.sh@46 -- # : 0 00:35:17.072 23:31:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:35:17.072 23:31:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:35:17.072 23:31:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:35:17.072 23:31:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:17.072 23:31:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:17.072 23:31:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:35:17.072 23:31:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:35:17.072 23:31:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:35:17.072 23:31:39 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:35:17.072 23:31:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:35:17.072 23:31:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:17.072 23:31:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:35:17.072 23:31:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:35:17.072 23:31:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:35:17.072 23:31:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:17.072 23:31:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:17.072 23:31:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:17.072 23:31:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:35:17.072 23:31:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:35:17.072 23:31:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:35:17.072 23:31:39 -- common/autotest_common.sh@10 -- # set +x 00:35:23.660 23:31:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:35:23.660 23:31:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:35:23.660 23:31:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:35:23.660 23:31:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:35:23.660 23:31:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:35:23.660 23:31:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:35:23.660 23:31:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:35:23.660 23:31:45 -- nvmf/common.sh@294 -- # net_devs=() 00:35:23.660 23:31:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:35:23.660 23:31:45 -- nvmf/common.sh@295 -- # e810=() 00:35:23.660 23:31:45 -- nvmf/common.sh@295 -- # local -ga e810 00:35:23.660 23:31:45 -- nvmf/common.sh@296 -- # x722=() 00:35:23.660 23:31:45 -- nvmf/common.sh@296 -- # local -ga x722 00:35:23.660 23:31:45 -- nvmf/common.sh@297 -- # mlx=() 00:35:23.660 23:31:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:35:23.660 23:31:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:23.660 23:31:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:23.660 23:31:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:23.660 23:31:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:23.660 23:31:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:23.660 23:31:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:23.660 23:31:45 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:23.660 23:31:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:23.660 23:31:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:23.660 23:31:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:23.660 23:31:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:23.660 23:31:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:35:23.660 23:31:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:35:23.660 23:31:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:35:23.660 23:31:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:35:23.660 23:31:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:35:23.660 23:31:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:35:23.660 23:31:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:35:23.660 23:31:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:23.660 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:23.660 23:31:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:35:23.660 23:31:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:35:23.660 23:31:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:23.660 23:31:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:23.660 23:31:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:35:23.660 23:31:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:35:23.660 23:31:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:23.660 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:23.660 23:31:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:35:23.660 23:31:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:35:23.660 23:31:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:23.660 23:31:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:23.660 23:31:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:35:23.660 23:31:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:35:23.660 23:31:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:35:23.660 23:31:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:35:23.660 23:31:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:35:23.660 23:31:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:23.660 23:31:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:35:23.660 23:31:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:23.660 23:31:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:23.660 Found net devices under 0000:31:00.0: cvl_0_0 00:35:23.660 23:31:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:35:23.660 23:31:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:35:23.660 23:31:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:23.660 23:31:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:35:23.660 23:31:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:23.660 23:31:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:23.660 Found net devices under 0000:31:00.1: cvl_0_1 00:35:23.660 23:31:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:35:23.660 23:31:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:35:23.660 23:31:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:35:23.660 23:31:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:35:23.660 23:31:45 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:35:23.660 23:31:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:35:23.660 23:31:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:23.660 23:31:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:23.660 23:31:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:23.660 23:31:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:35:23.660 23:31:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:23.660 23:31:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:23.660 23:31:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:35:23.660 23:31:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:23.660 23:31:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:23.660 23:31:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:35:23.660 23:31:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:35:23.660 23:31:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:35:23.660 23:31:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:23.660 23:31:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:23.660 23:31:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:23.660 23:31:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:35:23.660 23:31:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:23.660 23:31:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:23.660 23:31:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:23.660 23:31:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:35:23.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:23.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.832 ms 00:35:23.660 00:35:23.660 --- 10.0.0.2 ping statistics --- 00:35:23.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:23.660 rtt min/avg/max/mdev = 0.832/0.832/0.832/0.000 ms 00:35:23.660 23:31:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:23.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:23.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:35:23.660 00:35:23.660 --- 10.0.0.1 ping statistics --- 00:35:23.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:23.660 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:35:23.660 23:31:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:23.660 23:31:46 -- nvmf/common.sh@410 -- # return 0 00:35:23.660 23:31:46 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:35:23.660 23:31:46 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:27.867 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:35:27.867 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:35:27.867 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:35:27.867 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:35:27.867 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:35:27.867 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:35:27.867 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:35:27.867 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:35:27.867 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:35:27.867 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:35:27.867 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:35:27.867 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:35:27.867 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:35:27.867 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:35:27.867 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:35:27.867 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:35:27.867 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:35:27.867 23:31:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:27.867 23:31:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:35:27.867 23:31:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:35:27.867 23:31:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:27.867 23:31:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:35:27.867 23:31:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:35:27.867 23:31:49 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:35:27.867 23:31:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:35:27.867 23:31:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:35:27.867 23:31:49 -- common/autotest_common.sh@10 -- # set +x 00:35:27.867 23:31:49 -- nvmf/common.sh@469 -- # nvmfpid=3093476 00:35:27.867 23:31:49 -- nvmf/common.sh@470 -- # waitforlisten 3093476 00:35:27.867 23:31:49 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:27.867 23:31:49 -- common/autotest_common.sh@819 -- # '[' -z 3093476 ']' 00:35:27.867 23:31:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:27.867 23:31:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:35:27.867 23:31:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:27.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:27.867 23:31:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:35:27.867 23:31:49 -- common/autotest_common.sh@10 -- # set +x 00:35:27.867 [2024-06-07 23:31:50.031855] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
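The nvmf_tcp_init trace above amounts to the following: the target-side port of the NIC is moved into a private network namespace and the SPDK target is then started inside it, so the initiator (10.0.0.1) and the target (10.0.0.2) talk over a real TCP path on a single host. Condensed, with the interface names, addresses and flags from this run (binary path abbreviated):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # reachability check, as above
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf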
00:35:27.867 [2024-06-07 23:31:50.031917] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:27.867 EAL: No free 2048 kB hugepages reported on node 1 00:35:27.867 [2024-06-07 23:31:50.107938] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:27.867 [2024-06-07 23:31:50.153792] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:35:27.867 [2024-06-07 23:31:50.153941] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:27.867 [2024-06-07 23:31:50.153951] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:27.867 [2024-06-07 23:31:50.153959] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:27.867 [2024-06-07 23:31:50.154148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:27.867 [2024-06-07 23:31:50.154337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:27.867 [2024-06-07 23:31:50.154612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:35:27.867 [2024-06-07 23:31:50.154614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:28.129 23:31:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:35:28.129 23:31:50 -- common/autotest_common.sh@852 -- # return 0 00:35:28.129 23:31:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:35:28.129 23:31:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:35:28.129 23:31:50 -- common/autotest_common.sh@10 -- # set +x 00:35:28.390 23:31:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:28.390 23:31:50 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:28.390 23:31:50 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:35:28.390 23:31:50 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:35:28.390 23:31:50 -- scripts/common.sh@311 -- # local bdf bdfs 00:35:28.390 23:31:50 -- scripts/common.sh@312 -- # local nvmes 00:35:28.390 23:31:50 -- scripts/common.sh@314 -- # [[ -n 0000:65:00.0 ]] 00:35:28.390 23:31:50 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:28.390 23:31:50 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:35:28.390 23:31:50 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:35:28.390 23:31:50 -- scripts/common.sh@322 -- # uname -s 00:35:28.390 23:31:50 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:35:28.390 23:31:50 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:35:28.390 23:31:50 -- scripts/common.sh@327 -- # (( 1 )) 00:35:28.390 23:31:50 -- scripts/common.sh@328 -- # printf '%s\n' 0000:65:00.0 00:35:28.390 23:31:50 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:35:28.390 23:31:50 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:65:00.0 00:35:28.390 23:31:50 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:35:28.390 23:31:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:35:28.390 23:31:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:28.390 23:31:50 -- common/autotest_common.sh@10 -- # set +x 00:35:28.390 ************************************ 00:35:28.390 START TEST 
spdk_target_abort 00:35:28.390 ************************************ 00:35:28.390 23:31:50 -- common/autotest_common.sh@1104 -- # spdk_target 00:35:28.390 23:31:50 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:28.390 23:31:50 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:35:28.390 23:31:50 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:35:28.390 23:31:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:28.390 23:31:50 -- common/autotest_common.sh@10 -- # set +x 00:35:28.651 spdk_targetn1 00:35:28.651 23:31:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:28.651 23:31:51 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:28.651 23:31:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:28.651 23:31:51 -- common/autotest_common.sh@10 -- # set +x 00:35:28.651 [2024-06-07 23:31:51.168580] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:28.651 23:31:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:28.651 23:31:51 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:35:28.651 23:31:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:28.651 23:31:51 -- common/autotest_common.sh@10 -- # set +x 00:35:28.651 23:31:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:28.651 23:31:51 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:35:28.651 23:31:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:28.651 23:31:51 -- common/autotest_common.sh@10 -- # set +x 00:35:28.651 23:31:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:28.651 23:31:51 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:35:28.651 23:31:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:28.651 23:31:51 -- common/autotest_common.sh@10 -- # set +x 00:35:28.651 [2024-06-07 23:31:51.196786] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:28.651 23:31:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:28.651 23:31:51 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:35:28.651 23:31:51 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:28.651 23:31:51 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:28.651 23:31:51 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:28.651 23:31:51 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:28.651 23:31:51 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:35:28.651 23:31:51 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:28.651 23:31:51 -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:28.651 23:31:51 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:28.651 23:31:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:28.651 23:31:51 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:28.651 23:31:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:28.651 23:31:51 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:28.651 23:31:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid 
subnqn 00:35:28.651 23:31:51 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:28.651 23:31:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:28.651 23:31:51 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:28.651 23:31:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:28.651 23:31:51 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:35:28.651 23:31:51 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:28.651 23:31:51 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:35:28.651 EAL: No free 2048 kB hugepages reported on node 1 00:35:28.912 [2024-06-07 23:31:51.408056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:464 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:35:28.912 [2024-06-07 23:31:51.408082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:003d p:1 m:0 dnr:0 00:35:28.912 [2024-06-07 23:31:51.441602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2072 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:35:28.912 [2024-06-07 23:31:51.441623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:28.912 [2024-06-07 23:31:51.464081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3104 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:35:28.912 [2024-06-07 23:31:51.464100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0087 p:0 m:0 dnr:0 00:35:32.211 Initializing NVMe Controllers 00:35:32.211 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:35:32.211 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:35:32.211 Initialization complete. Launching workers. 
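Before the abort runs start, spdk_target_abort claims the local NVMe drive at 0000:65:00.0 as an SPDK bdev and re-exports it over NVMe/TCP. The rpc_cmd calls traced above correspond to roughly this rpc.py sequence (values taken from this run):

    scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target   # creates bdev spdk_targetn1
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420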
00:35:32.211 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 15086, failed: 3 00:35:32.211 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 3405, failed to submit 11684 00:35:32.211 success 652, unsuccess 2753, failed 0 00:35:32.211 23:31:54 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:32.211 23:31:54 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:35:32.211 EAL: No free 2048 kB hugepages reported on node 1 00:35:32.211 [2024-06-07 23:31:54.570177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:171 nsid:1 lba:672 len:8 PRP1 0x200007c4e000 PRP2 0x0 00:35:32.211 [2024-06-07 23:31:54.570209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:171 cdw0:0 sqhd:005b p:1 m:0 dnr:0 00:35:32.211 [2024-06-07 23:31:54.578407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:176 nsid:1 lba:896 len:8 PRP1 0x200007c44000 PRP2 0x0 00:35:32.211 [2024-06-07 23:31:54.578429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:176 cdw0:0 sqhd:0077 p:1 m:0 dnr:0 00:35:32.211 [2024-06-07 23:31:54.610359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:176 nsid:1 lba:1704 len:8 PRP1 0x200007c4a000 PRP2 0x0 00:35:32.211 [2024-06-07 23:31:54.610385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:176 cdw0:0 sqhd:00d7 p:1 m:0 dnr:0 00:35:32.211 [2024-06-07 23:31:54.677275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:191 nsid:1 lba:3448 len:8 PRP1 0x200007c3c000 PRP2 0x0 00:35:32.211 [2024-06-07 23:31:54.677298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:191 cdw0:0 sqhd:00b1 p:0 m:0 dnr:0 00:35:32.211 [2024-06-07 23:31:54.693366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:3776 len:8 PRP1 0x200007c5c000 PRP2 0x0 00:35:32.211 [2024-06-07 23:31:54.693388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:00da p:0 m:0 dnr:0 00:35:32.211 [2024-06-07 23:31:54.701209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:190 nsid:1 lba:3848 len:8 PRP1 0x200007c3e000 PRP2 0x0 00:35:32.211 [2024-06-07 23:31:54.701228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:190 cdw0:0 sqhd:00e9 p:0 m:0 dnr:0 00:35:33.150 [2024-06-07 23:31:55.644514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:26080 len:8 PRP1 0x200007c5e000 PRP2 0x0 00:35:33.150 [2024-06-07 23:31:55.644550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:00bd p:1 m:0 dnr:0 00:35:34.531 [2024-06-07 23:31:56.894487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:177 nsid:1 lba:55272 len:8 PRP1 0x200007c64000 PRP2 0x0 00:35:34.531 [2024-06-07 23:31:56.894517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:177 cdw0:0 sqhd:0000 p:1 m:0 dnr:0 00:35:35.100 Initializing NVMe Controllers 00:35:35.100 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:spdk_target 00:35:35.100 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:35:35.100 Initialization complete. Launching workers. 00:35:35.100 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8739, failed: 8 00:35:35.100 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1214, failed to submit 7533 00:35:35.100 success 334, unsuccess 880, failed 0 00:35:35.100 23:31:57 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:35.100 23:31:57 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:35:35.100 EAL: No free 2048 kB hugepages reported on node 1 00:35:35.360 [2024-06-07 23:31:57.857215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:158 nsid:1 lba:1984 len:8 PRP1 0x2000078fc000 PRP2 0x0 00:35:35.360 [2024-06-07 23:31:57.857239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:158 cdw0:0 sqhd:00ea p:0 m:0 dnr:0 00:35:35.360 [2024-06-07 23:31:58.008403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:155 nsid:1 lba:18904 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:35:35.360 [2024-06-07 23:31:58.008423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:155 cdw0:0 sqhd:0036 p:1 m:0 dnr:0 00:35:37.903 [2024-06-07 23:32:00.102739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:165 nsid:1 lba:260648 len:8 PRP1 0x2000078ea000 PRP2 0x0 00:35:37.903 [2024-06-07 23:32:00.102798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:165 cdw0:0 sqhd:003c p:1 m:0 dnr:0 00:35:38.474 Initializing NVMe Controllers 00:35:38.474 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:35:38.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:35:38.474 Initialization complete. Launching workers. 
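The pass now starting at -q 64 is the last of three; rabort() in abort_qd_sizes.sh simply replays the same abort workload at increasing queue depths, roughly:

    qds=(4 24 64)
    for qd in "${qds[@]}"; do
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
    done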
00:35:38.474 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 43459, failed: 3 00:35:38.474 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2612, failed to submit 40850 00:35:38.474 success 622, unsuccess 1990, failed 0 00:35:38.474 23:32:00 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:35:38.474 23:32:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:38.474 23:32:00 -- common/autotest_common.sh@10 -- # set +x 00:35:38.474 23:32:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:38.474 23:32:00 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:38.474 23:32:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:38.474 23:32:00 -- common/autotest_common.sh@10 -- # set +x 00:35:40.453 23:32:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:40.453 23:32:02 -- target/abort_qd_sizes.sh@62 -- # killprocess 3093476 00:35:40.453 23:32:02 -- common/autotest_common.sh@926 -- # '[' -z 3093476 ']' 00:35:40.453 23:32:02 -- common/autotest_common.sh@930 -- # kill -0 3093476 00:35:40.453 23:32:02 -- common/autotest_common.sh@931 -- # uname 00:35:40.453 23:32:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:40.453 23:32:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3093476 00:35:40.453 23:32:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:35:40.453 23:32:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:35:40.453 23:32:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3093476' 00:35:40.453 killing process with pid 3093476 00:35:40.453 23:32:02 -- common/autotest_common.sh@945 -- # kill 3093476 00:35:40.453 23:32:02 -- common/autotest_common.sh@950 -- # wait 3093476 00:35:40.453 00:35:40.453 real 0m12.022s 00:35:40.453 user 0m48.745s 00:35:40.453 sys 0m1.943s 00:35:40.453 23:32:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:40.453 23:32:02 -- common/autotest_common.sh@10 -- # set +x 00:35:40.453 ************************************ 00:35:40.453 END TEST spdk_target_abort 00:35:40.453 ************************************ 00:35:40.453 23:32:02 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:35:40.453 23:32:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:35:40.453 23:32:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:40.453 23:32:02 -- common/autotest_common.sh@10 -- # set +x 00:35:40.453 ************************************ 00:35:40.453 START TEST kernel_target_abort 00:35:40.453 ************************************ 00:35:40.453 23:32:02 -- common/autotest_common.sh@1104 -- # kernel_target 00:35:40.453 23:32:02 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:35:40.453 23:32:02 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:35:40.453 23:32:02 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:35:40.453 23:32:02 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:35:40.453 23:32:02 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:35:40.453 23:32:02 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:35:40.453 23:32:02 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:40.454 23:32:02 -- nvmf/common.sh@627 -- # local block nvme 00:35:40.454 23:32:02 
-- nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:35:40.454 23:32:02 -- nvmf/common.sh@630 -- # modprobe nvmet 00:35:40.454 23:32:02 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:40.454 23:32:02 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:43.754 Waiting for block devices as requested 00:35:43.754 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:44.016 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:44.016 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:44.016 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:44.277 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:44.277 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:44.277 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:44.538 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:44.538 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:35:44.538 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:44.800 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:44.800 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:44.800 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:45.061 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:45.061 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:45.061 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:45.061 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:45.061 23:32:07 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:35:45.061 23:32:07 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:45.061 23:32:07 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:35:45.061 23:32:07 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:35:45.061 23:32:07 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:45.322 No valid GPT data, bailing 00:35:45.322 23:32:07 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:45.322 23:32:07 -- scripts/common.sh@393 -- # pt= 00:35:45.322 23:32:07 -- scripts/common.sh@394 -- # return 1 00:35:45.322 23:32:07 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:35:45.322 23:32:07 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme0n1 ]] 00:35:45.322 23:32:07 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:35:45.322 23:32:07 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:35:45.322 23:32:07 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:45.322 23:32:07 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:35:45.322 23:32:07 -- nvmf/common.sh@654 -- # echo 1 00:35:45.322 23:32:07 -- nvmf/common.sh@655 -- # echo /dev/nvme0n1 00:35:45.322 23:32:07 -- nvmf/common.sh@656 -- # echo 1 00:35:45.322 23:32:07 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:35:45.322 23:32:07 -- nvmf/common.sh@663 -- # echo tcp 00:35:45.323 23:32:07 -- nvmf/common.sh@664 -- # echo 4420 00:35:45.323 23:32:07 -- nvmf/common.sh@665 -- # echo ipv4 00:35:45.323 23:32:07 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:45.323 23:32:07 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:35:45.323 00:35:45.323 Discovery Log Number of Records 2, Generation counter 2 00:35:45.323 =====Discovery Log Entry 0====== 00:35:45.323 trtype: tcp 00:35:45.323 adrfam: ipv4 00:35:45.323 
subtype: current discovery subsystem 00:35:45.323 treq: not specified, sq flow control disable supported 00:35:45.323 portid: 1 00:35:45.323 trsvcid: 4420 00:35:45.323 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:45.323 traddr: 10.0.0.1 00:35:45.323 eflags: none 00:35:45.323 sectype: none 00:35:45.323 =====Discovery Log Entry 1====== 00:35:45.323 trtype: tcp 00:35:45.323 adrfam: ipv4 00:35:45.323 subtype: nvme subsystem 00:35:45.323 treq: not specified, sq flow control disable supported 00:35:45.323 portid: 1 00:35:45.323 trsvcid: 4420 00:35:45.323 subnqn: kernel_target 00:35:45.323 traddr: 10.0.0.1 00:35:45.323 eflags: none 00:35:45.323 sectype: none 00:35:45.323 23:32:07 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:35:45.323 23:32:07 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:45.323 23:32:07 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:45.323 23:32:07 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:45.323 23:32:07 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:45.323 23:32:07 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:35:45.323 23:32:07 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:45.323 23:32:07 -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:45.323 23:32:07 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:45.323 23:32:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:45.323 23:32:07 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:45.323 23:32:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:45.323 23:32:07 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:45.323 23:32:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:45.323 23:32:07 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:45.323 23:32:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:45.323 23:32:07 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:45.323 23:32:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:45.323 23:32:07 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:35:45.323 23:32:07 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:45.323 23:32:07 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:35:45.323 EAL: No free 2048 kB hugepages reported on node 1 00:35:48.631 Initializing NVMe Controllers 00:35:48.631 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:35:48.631 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:35:48.631 Initialization complete. Launching workers. 
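For the kernel_target_abort case, the target is the in-kernel nvmet driver rather than SPDK: configure_kernel_target wires it up through configfs, which is what the bare mkdir/echo/ln lines in the trace are doing. A sketch of the same steps, assuming the standard nvmet configfs attribute names (the trace only shows the values being echoed, not the destination files), with the device and addresses from this run:

    modprobe nvmet
    mkdir -p /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1
    mkdir -p /sys/kernel/config/nvmet/ports/1
    echo SPDK-kernel_target > /sys/kernel/config/nvmet/subsystems/kernel_target/attr_serial
    echo 1                  > /sys/kernel/config/nvmet/subsystems/kernel_target/attr_allow_any_host
    echo /dev/nvme0n1       > /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1/device_path
    echo 1                  > /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1/enable
    echo 10.0.0.1           > /sys/kernel/config/nvmet/ports/1/addr_traddr
    echo tcp                > /sys/kernel/config/nvmet/ports/1/addr_trtype
    echo 4420               > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
    echo ipv4               > /sys/kernel/config/nvmet/ports/1/addr_adrfam
    ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/
    nvme discover -t tcp -a 10.0.0.1 -s 4420    # should list kernel_target, as in the discovery log above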
00:35:48.631 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 58267, failed: 0 00:35:48.631 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 58267, failed to submit 0 00:35:48.631 success 0, unsuccess 58267, failed 0 00:35:48.631 23:32:10 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:48.631 23:32:10 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:35:48.631 EAL: No free 2048 kB hugepages reported on node 1 00:35:51.934 Initializing NVMe Controllers 00:35:51.934 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:35:51.934 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:35:51.934 Initialization complete. Launching workers. 00:35:51.934 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 99798, failed: 0 00:35:51.934 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 25154, failed to submit 74644 00:35:51.934 success 0, unsuccess 25154, failed 0 00:35:51.934 23:32:13 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:51.934 23:32:13 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:35:51.934 EAL: No free 2048 kB hugepages reported on node 1 00:35:54.476 Initializing NVMe Controllers 00:35:54.476 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:35:54.476 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:35:54.476 Initialization complete. Launching workers. 
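The configfs writes behind the kernel-target setup above appear in the log only as nvmf/common.sh line numbers, so the following is a minimal standalone sketch of the same kernel nvmet-over-TCP target setup and teardown. The device path /dev/nvme0n1, the kernel_target NQN and 10.0.0.1:4420 are the values from this run; the attribute file names are the standard kernel nvmet configfs layout rather than a copy of the test script, so treat them as an illustration, not the harness itself.

#!/usr/bin/env bash
# Sketch of a kernel nvmet-over-TCP target like the one exercised above.
# Values are taken from this run; attribute paths are the standard nvmet configfs layout.
set -e
nqn=kernel_target
dev=/dev/nvme0n1
modprobe nvmet nvmet_tcp                        # the harness loads nvmet; nvmet_tcp backs the tcp port below
cd /sys/kernel/config/nvmet
mkdir -p subsystems/$nqn/namespaces/1 ports/1
echo 1        > subsystems/$nqn/attr_allow_any_host
echo "$dev"   > subsystems/$nqn/namespaces/1/device_path
echo 1        > subsystems/$nqn/namespaces/1/enable
echo 10.0.0.1 > ports/1/addr_traddr
echo tcp      > ports/1/addr_trtype
echo 4420     > ports/1/addr_trsvcid
echo ipv4     > ports/1/addr_adrfam
ln -s /sys/kernel/config/nvmet/subsystems/$nqn ports/1/subsystems/$nqn
nvme discover -t tcp -a 10.0.0.1 -s 4420        # should list the discovery subsystem plus $nqn, as in the log

# Teardown, mirroring clean_kernel_target later in this log:
echo 0 > subsystems/$nqn/namespaces/1/enable
rm -f ports/1/subsystems/$nqn
rmdir subsystems/$nqn/namespaces/1 ports/1 subsystems/$nqn
modprobe -r nvmet_tcp nvmet

With the port symlinked to the subsystem, the abort example is then pointed at trtype:tcp traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target and swept over queue depths 4, 24 and 64, which is exactly what the runs above and below record.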
00:35:54.476 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 96276, failed: 0 00:35:54.476 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 24050, failed to submit 72226 00:35:54.476 success 0, unsuccess 24050, failed 0 00:35:54.476 23:32:17 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:35:54.476 23:32:17 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:35:54.476 23:32:17 -- nvmf/common.sh@677 -- # echo 0 00:35:54.476 23:32:17 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:35:54.476 23:32:17 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:35:54.476 23:32:17 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:54.476 23:32:17 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:35:54.476 23:32:17 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:35:54.476 23:32:17 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:35:54.476 00:35:54.476 real 0m14.172s 00:35:54.476 user 0m7.413s 00:35:54.476 sys 0m3.720s 00:35:54.476 23:32:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:54.476 23:32:17 -- common/autotest_common.sh@10 -- # set +x 00:35:54.476 ************************************ 00:35:54.476 END TEST kernel_target_abort 00:35:54.476 ************************************ 00:35:54.476 23:32:17 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:35:54.476 23:32:17 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:35:54.476 23:32:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:35:54.476 23:32:17 -- nvmf/common.sh@116 -- # sync 00:35:54.476 23:32:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:35:54.476 23:32:17 -- nvmf/common.sh@119 -- # set +e 00:35:54.476 23:32:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:35:54.476 23:32:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:35:54.737 rmmod nvme_tcp 00:35:54.737 rmmod nvme_fabrics 00:35:54.737 rmmod nvme_keyring 00:35:54.737 23:32:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:35:54.737 23:32:17 -- nvmf/common.sh@123 -- # set -e 00:35:54.737 23:32:17 -- nvmf/common.sh@124 -- # return 0 00:35:54.737 23:32:17 -- nvmf/common.sh@477 -- # '[' -n 3093476 ']' 00:35:54.737 23:32:17 -- nvmf/common.sh@478 -- # killprocess 3093476 00:35:54.737 23:32:17 -- common/autotest_common.sh@926 -- # '[' -z 3093476 ']' 00:35:54.737 23:32:17 -- common/autotest_common.sh@930 -- # kill -0 3093476 00:35:54.737 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3093476) - No such process 00:35:54.737 23:32:17 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3093476 is not found' 00:35:54.737 Process with pid 3093476 is not found 00:35:54.737 23:32:17 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:35:54.737 23:32:17 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:58.944 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:35:58.944 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:35:58.944 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:35:58.944 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:35:58.944 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:35:58.944 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:35:58.944 0000:80:01.0 (8086 0b00): Already using the ioatdma 
driver 00:35:58.944 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:35:58.944 0000:65:00.0 (144d a80a): Already using the nvme driver 00:35:58.944 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:35:58.944 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:35:58.944 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:35:58.944 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:35:58.944 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:35:58.944 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:35:58.944 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:35:58.944 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:35:58.944 23:32:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:35:58.944 23:32:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:35:58.944 23:32:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:58.944 23:32:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:35:58.944 23:32:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:58.944 23:32:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:58.944 23:32:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:00.859 23:32:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:36:00.859 00:36:00.859 real 0m43.946s 00:36:00.859 user 1m1.181s 00:36:00.859 sys 0m15.867s 00:36:00.859 23:32:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:00.859 23:32:23 -- common/autotest_common.sh@10 -- # set +x 00:36:00.859 ************************************ 00:36:00.859 END TEST nvmf_abort_qd_sizes 00:36:00.859 ************************************ 00:36:00.859 23:32:23 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:36:00.859 23:32:23 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:36:00.859 23:32:23 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:36:00.859 23:32:23 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:36:00.859 23:32:23 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:36:00.859 23:32:23 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:36:00.859 23:32:23 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:36:00.859 23:32:23 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:00.859 23:32:23 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:00.859 23:32:23 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:00.859 23:32:23 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:36:00.859 23:32:23 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:00.859 23:32:23 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:00.859 23:32:23 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:36:00.859 23:32:23 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:36:00.859 23:32:23 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:36:00.859 23:32:23 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:36:00.859 23:32:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:36:00.859 23:32:23 -- common/autotest_common.sh@10 -- # set +x 00:36:00.859 23:32:23 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:36:00.859 23:32:23 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:36:00.859 23:32:23 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:36:00.859 23:32:23 -- common/autotest_common.sh@10 -- # set +x 00:36:09.000 INFO: APP EXITING 00:36:09.000 INFO: killing all VMs 00:36:09.000 INFO: killing vhost app 00:36:09.000 INFO: EXIT DONE 00:36:11.546 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:36:11.546 0000:80:01.7 (8086 
0b00): Already using the ioatdma driver 00:36:11.546 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:36:11.546 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:36:11.546 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:36:11.546 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:36:11.546 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:36:11.546 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:36:11.546 0000:65:00.0 (144d a80a): Already using the nvme driver 00:36:11.546 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:36:11.546 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:36:11.807 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:36:11.807 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:36:11.807 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:36:11.807 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:36:11.807 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:36:11.808 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:36:15.113 Cleaning 00:36:15.113 Removing: /var/run/dpdk/spdk0/config 00:36:15.113 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:15.113 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:15.374 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:15.374 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:15.374 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:15.374 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:15.374 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:15.374 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:15.374 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:15.374 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:15.374 Removing: /var/run/dpdk/spdk1/config 00:36:15.374 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:15.374 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:15.374 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:15.374 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:15.374 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:15.374 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:15.374 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:15.374 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:15.374 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:15.374 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:15.374 Removing: /var/run/dpdk/spdk1/mp_socket 00:36:15.374 Removing: /var/run/dpdk/spdk2/config 00:36:15.374 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:15.374 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:15.374 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:15.374 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:15.374 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:15.374 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:15.374 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:15.374 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:15.374 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:15.374 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:15.374 Removing: /var/run/dpdk/spdk3/config 00:36:15.374 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:15.375 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:15.375 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:15.375 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:15.375 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:15.375 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:15.375 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:15.375 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:15.375 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:15.375 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:15.375 Removing: /var/run/dpdk/spdk4/config 00:36:15.375 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:15.375 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:15.375 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:15.375 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:15.375 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:15.375 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:15.375 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:15.375 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:15.375 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:15.375 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:15.375 Removing: /dev/shm/bdev_svc_trace.1 00:36:15.375 Removing: /dev/shm/nvmf_trace.0 00:36:15.375 Removing: /dev/shm/spdk_tgt_trace.pid2613816 00:36:15.375 Removing: /var/run/dpdk/spdk0 00:36:15.375 Removing: /var/run/dpdk/spdk1 00:36:15.375 Removing: /var/run/dpdk/spdk2 00:36:15.375 Removing: /var/run/dpdk/spdk3 00:36:15.375 Removing: /var/run/dpdk/spdk4 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2612337 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2613816 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2614531 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2615511 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2616299 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2616661 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2616905 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2617167 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2617550 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2617909 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2618122 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2618330 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2619378 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2622663 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2623027 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2623399 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2623632 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2624110 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2624120 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2624647 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2624837 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2625225 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2625273 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2625683 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2625714 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2626221 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2626490 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2626880 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2627271 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2627411 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2627549 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2628075 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2628337 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2628481 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2628832 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2629166 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2629352 00:36:15.637 
Removing: /var/run/dpdk/spdk_pid2629540 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2629891 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2630227 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2630373 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2630600 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2630949 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2631278 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2631404 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2631659 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2632011 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2632302 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2632422 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2632713 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2633068 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2633288 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2633454 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2633774 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2634129 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2634300 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2634493 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2634827 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2635178 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2635294 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2635555 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2635889 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2636201 00:36:15.637 Removing: /var/run/dpdk/spdk_pid2636322 00:36:15.898 Removing: /var/run/dpdk/spdk_pid2636618 00:36:15.898 Removing: /var/run/dpdk/spdk_pid2636959 00:36:15.898 Removing: /var/run/dpdk/spdk_pid2637285 00:36:15.898 Removing: /var/run/dpdk/spdk_pid2637411 00:36:15.898 Removing: /var/run/dpdk/spdk_pid2637698 00:36:15.898 Removing: /var/run/dpdk/spdk_pid2638032 00:36:15.898 Removing: /var/run/dpdk/spdk_pid2638380 00:36:15.898 Removing: /var/run/dpdk/spdk_pid2638455 00:36:15.898 Removing: /var/run/dpdk/spdk_pid2638856 00:36:15.898 Removing: /var/run/dpdk/spdk_pid2643386 00:36:15.898 Removing: /var/run/dpdk/spdk_pid2741255 00:36:15.898 Removing: /var/run/dpdk/spdk_pid2746520 00:36:15.898 Removing: /var/run/dpdk/spdk_pid2758257 00:36:15.898 Removing: /var/run/dpdk/spdk_pid2764756 00:36:15.898 Removing: /var/run/dpdk/spdk_pid2770115 00:36:15.898 Removing: /var/run/dpdk/spdk_pid2770916 00:36:15.898 Removing: /var/run/dpdk/spdk_pid2778212 00:36:15.898 Removing: /var/run/dpdk/spdk_pid2778216 00:36:15.898 Removing: /var/run/dpdk/spdk_pid2779228 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2780252 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2781269 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2781949 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2781951 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2782290 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2782325 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2782456 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2783500 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2784511 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2785614 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2786232 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2786361 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2786611 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2787825 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2789225 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2799239 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2799686 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2804674 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2811725 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2814688 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2827497 00:36:15.899 
Removing: /var/run/dpdk/spdk_pid2838272 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2840303 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2841332 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2861865 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2866372 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2871854 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2874219 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2876474 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2876812 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2876915 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2877182 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2877907 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2880007 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2881039 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2881745 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2888259 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2895032 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2900778 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2945986 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2950861 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2958182 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2959690 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2961303 00:36:15.899 Removing: /var/run/dpdk/spdk_pid2966964 00:36:16.160 Removing: /var/run/dpdk/spdk_pid2971975 00:36:16.160 Removing: /var/run/dpdk/spdk_pid2981002 00:36:16.160 Removing: /var/run/dpdk/spdk_pid2981019 00:36:16.160 Removing: /var/run/dpdk/spdk_pid2986133 00:36:16.160 Removing: /var/run/dpdk/spdk_pid2986473 00:36:16.160 Removing: /var/run/dpdk/spdk_pid2986659 00:36:16.160 Removing: /var/run/dpdk/spdk_pid2987154 00:36:16.160 Removing: /var/run/dpdk/spdk_pid2987165 00:36:16.160 Removing: /var/run/dpdk/spdk_pid2988536 00:36:16.160 Removing: /var/run/dpdk/spdk_pid2990563 00:36:16.160 Removing: /var/run/dpdk/spdk_pid2992481 00:36:16.160 Removing: /var/run/dpdk/spdk_pid2994357 00:36:16.160 Removing: /var/run/dpdk/spdk_pid2996319 00:36:16.160 Removing: /var/run/dpdk/spdk_pid2998352 00:36:16.160 Removing: /var/run/dpdk/spdk_pid3005845 00:36:16.160 Removing: /var/run/dpdk/spdk_pid3006424 00:36:16.160 Removing: /var/run/dpdk/spdk_pid3007565 00:36:16.160 Removing: /var/run/dpdk/spdk_pid3008755 00:36:16.160 Removing: /var/run/dpdk/spdk_pid3015129 00:36:16.160 Removing: /var/run/dpdk/spdk_pid3018876 00:36:16.160 Removing: /var/run/dpdk/spdk_pid3025293 00:36:16.160 Removing: /var/run/dpdk/spdk_pid3032075 00:36:16.160 Removing: /var/run/dpdk/spdk_pid3039127 00:36:16.160 Removing: /var/run/dpdk/spdk_pid3039896 00:36:16.160 Removing: /var/run/dpdk/spdk_pid3040662 00:36:16.160 Removing: /var/run/dpdk/spdk_pid3041356 00:36:16.160 Removing: /var/run/dpdk/spdk_pid3042372 00:36:16.160 Removing: /var/run/dpdk/spdk_pid3043114 00:36:16.160 Removing: /var/run/dpdk/spdk_pid3043821 00:36:16.160 Removing: /var/run/dpdk/spdk_pid3044512 00:36:16.160 Removing: /var/run/dpdk/spdk_pid3049643 00:36:16.160 Removing: /var/run/dpdk/spdk_pid3049971 00:36:16.160 Removing: /var/run/dpdk/spdk_pid3057125 00:36:16.160 Removing: /var/run/dpdk/spdk_pid3057372 00:36:16.160 Removing: /var/run/dpdk/spdk_pid3060021 00:36:16.160 Removing: /var/run/dpdk/spdk_pid3067800 00:36:16.160 Removing: /var/run/dpdk/spdk_pid3067806 00:36:16.160 Removing: /var/run/dpdk/spdk_pid3073802 00:36:16.160 Removing: /var/run/dpdk/spdk_pid3076195 00:36:16.160 Removing: /var/run/dpdk/spdk_pid3078557 00:36:16.160 Removing: /var/run/dpdk/spdk_pid3079935 00:36:16.160 Removing: /var/run/dpdk/spdk_pid3082309 00:36:16.160 Removing: /var/run/dpdk/spdk_pid3083842 00:36:16.160 
Removing: /var/run/dpdk/spdk_pid3093667 00:36:16.160 Removing: /var/run/dpdk/spdk_pid3094337 00:36:16.160 Removing: /var/run/dpdk/spdk_pid3095007 00:36:16.160 Removing: /var/run/dpdk/spdk_pid3097792 00:36:16.160 Removing: /var/run/dpdk/spdk_pid3098370 00:36:16.160 Removing: /var/run/dpdk/spdk_pid3099045 00:36:16.160 Clean 00:36:16.421 killing process with pid 2555652 00:36:26.465 killing process with pid 2555649 00:36:26.465 killing process with pid 2555651 00:36:26.465 killing process with pid 2555650 00:36:26.465 23:32:49 -- common/autotest_common.sh@1436 -- # return 0 00:36:26.465 23:32:49 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:36:26.465 23:32:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:36:26.465 23:32:49 -- common/autotest_common.sh@10 -- # set +x 00:36:26.725 23:32:49 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:36:26.725 23:32:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:36:26.725 23:32:49 -- common/autotest_common.sh@10 -- # set +x 00:36:26.725 23:32:49 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:26.725 23:32:49 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:36:26.725 23:32:49 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:36:26.725 23:32:49 -- spdk/autotest.sh@394 -- # hash lcov 00:36:26.725 23:32:49 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:36:26.725 23:32:49 -- spdk/autotest.sh@396 -- # hostname 00:36:26.725 23:32:49 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:36:26.725 geninfo: WARNING: invalid characters removed from testname! 
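The entries that follow merge the baseline and test captures and strip external code out of the report. A condensed sketch of that lcov flow is below; the directory and test-name arguments are illustrative stand-ins for the workspace paths and the spdk-cyp-12 test name used by this job, and the baseline capture happens earlier in the job than the excerpt shown here.

#!/usr/bin/env bash
# Condensed sketch of the coverage post-processing in the surrounding entries (paths illustrative).
RC="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
lcov $RC -q -c -i -d ./spdk -t base -o cov_base.info               # baseline capture, taken before the tests
lcov $RC -q -c    -d ./spdk -t test -o cov_test.info               # capture after the tests have run
lcov $RC -q -a cov_base.info -a cov_test.info -o cov_total.info    # merge the two captures
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
  lcov $RC -q -r cov_total.info "$pat" -o cov_total.info           # drop bundled and system sources
done
genhtml cov_total.info -o coverage_html                            # optional HTML report; not part of this job's shown output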
00:36:53.305 23:33:12 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:53.305 23:33:15 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:53.876 23:33:16 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:55.787 23:33:18 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:57.174 23:33:19 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:58.559 23:33:21 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:59.944 23:33:22 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:00.206 23:33:22 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:00.206 23:33:22 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:37:00.206 23:33:22 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:00.206 23:33:22 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:00.206 23:33:22 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:00.206 23:33:22 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:00.206 23:33:22 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:00.206 23:33:22 -- paths/export.sh@5 -- $ export PATH 00:37:00.206 23:33:22 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:00.206 23:33:22 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:37:00.206 23:33:22 -- common/autobuild_common.sh@435 -- $ date +%s 00:37:00.206 23:33:22 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1717796002.XXXXXX 00:37:00.206 23:33:22 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1717796002.8L5Mxf 00:37:00.206 23:33:22 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:37:00.206 23:33:22 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']' 00:37:00.206 23:33:22 -- common/autobuild_common.sh@442 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:37:00.206 23:33:22 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:37:00.206 23:33:22 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:37:00.206 23:33:22 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:37:00.206 23:33:22 -- common/autobuild_common.sh@451 -- $ get_config_params 00:37:00.206 23:33:22 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:37:00.206 23:33:22 -- common/autotest_common.sh@10 -- $ set +x 00:37:00.206 23:33:22 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:37:00.206 23:33:22 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:37:00.206 23:33:22 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:00.206 23:33:22 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:37:00.206 23:33:22 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:37:00.206 23:33:22 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:37:00.206 23:33:22 -- 
spdk/autopackage.sh@19 -- $ timing_finish 00:37:00.206 23:33:22 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:00.206 23:33:22 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:37:00.206 23:33:22 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:00.206 23:33:22 -- spdk/autopackage.sh@20 -- $ exit 0 00:37:00.206 + [[ -n 2501233 ]] 00:37:00.206 + sudo kill 2501233 00:37:00.217 [Pipeline] } 00:37:00.234 [Pipeline] // stage 00:37:00.240 [Pipeline] } 00:37:00.256 [Pipeline] // timeout 00:37:00.261 [Pipeline] } 00:37:00.276 [Pipeline] // catchError 00:37:00.281 [Pipeline] } 00:37:00.298 [Pipeline] // wrap 00:37:00.303 [Pipeline] } 00:37:00.317 [Pipeline] // catchError 00:37:00.325 [Pipeline] stage 00:37:00.327 [Pipeline] { (Epilogue) 00:37:00.339 [Pipeline] catchError 00:37:00.341 [Pipeline] { 00:37:00.354 [Pipeline] echo 00:37:00.355 Cleanup processes 00:37:00.360 [Pipeline] sh 00:37:00.643 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:00.643 3116089 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:00.657 [Pipeline] sh 00:37:00.943 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:00.943 ++ grep -v 'sudo pgrep' 00:37:00.943 ++ awk '{print $1}' 00:37:00.943 + sudo kill -9 00:37:00.943 + true 00:37:00.956 [Pipeline] sh 00:37:01.242 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:13.481 [Pipeline] sh 00:37:13.767 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:13.767 Artifacts sizes are good 00:37:13.782 [Pipeline] archiveArtifacts 00:37:13.789 Archiving artifacts 00:37:14.045 [Pipeline] sh 00:37:14.384 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:37:14.398 [Pipeline] cleanWs 00:37:14.407 [WS-CLEANUP] Deleting project workspace... 00:37:14.408 [WS-CLEANUP] Deferred wipeout is used... 00:37:14.415 [WS-CLEANUP] done 00:37:14.417 [Pipeline] } 00:37:14.436 [Pipeline] // catchError 00:37:14.447 [Pipeline] sh 00:37:14.733 + logger -p user.info -t JENKINS-CI 00:37:14.743 [Pipeline] } 00:37:14.759 [Pipeline] // stage 00:37:14.765 [Pipeline] } 00:37:14.782 [Pipeline] // node 00:37:14.788 [Pipeline] End of Pipeline 00:37:14.830 Finished: SUCCESS
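For reference, the leftover-process sweep that both the prologue and this epilogue run before touching the workspace reduces to a short pipeline; the workspace path below is this job's and would differ per node.

#!/usr/bin/env bash
# Sketch of the epilogue's leftover-process sweep (same pgrep/grep/awk/kill chain as the log above).
ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # this job's workspace; adjust per node
pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')
# kill -9 with an empty argument list fails, hence the guard; the raw log shows the same effect via "+ true"
[ -n "$pids" ] && sudo kill -9 $pids || true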